copyleft hardware planet

November 30, 2015

Free Electrons

UN climate conference: switching to “green” electricity

Wind turbines in Denmark

The United Nations 2015 Climate Change Conference is an opportunity for everyone to think about contributing to the transition to renewable and sustainable energy sources.

One way to do that is to buy electricity that is produced from renewable resources (solar, wind, hydro, biomass…). With the worldwide opening of the energy markets, this should now be possible in most parts of the world.

So, with a power consumption between 4,000 and 5,000 kWh per year, we have decided to make the switch for our main office in Orange, France. But how do you choose a good supplier?

Greenpeace turned out to be a very good source of information about this topic, comparing the offerings from various suppliers, and finding out which ones really make serious investments in renewable energy sources.

Here are the countries for which we have found Greenpeace rankings:
  • Australia
  • France

If you find a similar report for your country, please let us know, and we will add it to this list.

Back to our case, we chose Enercoop, a French cooperative company only producing renewable energy. This supplier has by far the best ranking from Greenpeace, and stands out from more traditional suppliers which too often are just trading green certificates, charging consumers a premium rate without investing by themselves in green energy production.

The process of switching to a green electricity supplier was very straightforward: all we needed was an electricity bill and 15 minutes of time, whether you are an individual or represent a company. From now on, Enercoop guarantees that for every kWh we consume from the power grid, they will inject the same amount of energy into the grid from renewable sources. There is no greater risk of power outages than before, as the national company operating and maintaining the grid stays the same.

It’s true that our electricity will cost about 20% more than nuclear electricity, but at least what we spend will support local investments in renewable energy sources, which don’t degrade the fragile environment that keeps us alive.

Your comments and own tips are welcome!

by Michael Opdenacker at November 30, 2015 10:37 AM

November 28, 2015

Bunnie Studios

Products over Patents

NPR’s Audrey Quinn from Planet Money explores IP in the age of rapid manufacturing by investigating the two-wheel self balancing scooter. When patent paperwork takes more time and resources than product production, more agile systems of idea sharing evolve to keep up with the new pace of innovation.

If the embedded audio player above isn’t working, try this link. Seems like the embed isn’t working outside the US…

by bunnie at November 28, 2015 11:20 PM

MLTalk with Joi Ito, Nadya Peek and me

I gave an MLTalk at the MIT Media Lab this week, where I disclose a bit more about the genesis of the Orchard platform used to build, among other things, the Burning Man sexually generated light pattern badge I wrote about a couple months back.

The short provocation is followed up by a conversation with Joi Ito, the Director of the Media Lab, and Nadya Peek, a renowned expert in digital fabrication from the CBA (and incidentally, the namesake of the Peek Array in the Novena laptop) about supply chains, digital fabrication, trustability, and things we’d like to see in the future of low volume manufacturing.

I figured I’d throw a link here on the blog to break the monotony of Name that Ware posts. Sorry for the lack of new posts, but I’ve been working on a couple of books and magazine articles in the past months (some of which have made it to print: IEEE Spectrum, Wired), which have consumed most of my capacity for creative writing.

by bunnie at November 28, 2015 12:50 AM

Name that Ware November 2015

This month’s ware is shown below:

And below are views of the TO-220 devices which are folded over in the top-down photo:

We continue this month with the campaign to get Nava Whiteford permission to buy a SEM. Thanks again to Nava for providing another interesting ware!

by bunnie at November 28, 2015 12:22 AM

Winner, Name that Ware October 2015

The ware for October 2015 was a LeCroy LT342L. Nava notes that it was actually manufactured by Iwatsu, but the ASICs on the inside all say LeCroy. Congrats to Carl Smith for nailing it; email me for your prize, and happy Thanksgiving!

by bunnie at November 28, 2015 12:22 AM

November 19, 2015

Geoffrey L. Barrows - DIY Drones

360 degree stereo vision and obstacle avoidance on a Crazyflie nano quadrotor

(More info and full post here)

I've been experimenting with putting 360 degree vision, including stereo vision, onto a Crazyflie nano quadrotor to assist with flight in near-Earth and indoor environments. Four stereo boards, each holding two image sensor chips and lenses, together see in all directions except up and down. We developed the image sensor chips and lenses in-house for this work, since nothing available elsewhere is suitable for platforms of this size. The control processor (on the square PCB in the middle) uses optical flow for position control and stereo vision for obstacle avoidance. The system uses a "supervised autonomy" control scheme in which the operator gives high level commands via control sticks (e.g. "move in this general direction") and the control system implements the maneuver while avoiding nearby obstacles. All sensing and processing is performed on board. The Crazyflie itself was unmodified other than a few lines of code in its firmware to get the target Euler angles and throttle from the vision system.
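The "supervised autonomy" blending described above can be sketched as follows. This is a hypothetical illustration, not the actual Centeye/Crazyflie firmware; the function name, gains, and obstacle representation are all assumptions:

```python
def blend_command(stick_vx, stick_vy, obstacles, safe_dist=0.5, gain=1.0):
    """Combine an operator velocity command with repulsion from nearby obstacles.

    stick_vx, stick_vy: operator command (m/s).
    obstacles: list of (dx, dy) vectors to detected obstacles in metres,
    as might be produced by the stereo vision boards.
    """
    rx = ry = 0.0
    for dx, dy in obstacles:
        d = (dx * dx + dy * dy) ** 0.5
        if 1e-6 < d < safe_dist:
            # Push away from the obstacle, harder the closer it is.
            w = gain * (safe_dist - d) / d
            rx -= w * dx
            ry -= w * dy
    return stick_vx + rx, stick_vy + ry
```

With no obstacles in range, the operator's command passes through unchanged; an obstacle directly ahead reduces the forward velocity component.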

Below is a video from a few flights in an indoor space. This is best viewed on a laptop or desktop computer to see the annotations in the video. The performance is not perfect, but much better than the pure "hover in place" systems I had flown in the past, since obstacles are now avoided. I would not have been able to fly in the last room without the vision system to assist me! There are still obvious shortcomings (for example, the stereo vision currently does not respond to blank walls), but we'll address this soon...

by Geoffrey L. Barrows at November 19, 2015 11:28 PM

November 15, 2015

Harald Welte

GSM test network at 32C3, after all

Contrary to my blog post yesterday, it looks like we will have a private GSM network at the CCC congress again, after all.

It appears that Vodafone Germany (who was awarded the former DECT guard band in the 2015 spectrum auctions) is not yet using it in December, and they agreed that we can use it at the 32C3.

With this approval from Vodafone Germany we can now go to the regulator (BNetzA) and obtain the usual test license. Given that we used to get the license in the past, and that Vodafone has agreed, this should be a mere formality.

For the German language readers who appreciate the language of the administration, it will be a Frequenzzuteilung für Versuchszwecke im nichtöffentlichen mobilen Landfunk.

So thanks to Vodafone Germany, who at least this time enabled us to run a network again. By the end of 2016 you can be sure they will have put their new spectrum to use, so I'm not that optimistic that this will be possible again.

by Harald Welte at November 15, 2015 11:00 PM

No GSM test network at 32C3

I currently don't assume that there will be a GSM network at the 32C3.

Ever since OpenBSC was created in 2008, the annual CCC congress has been a great opportunity to test OpenBSC and related software with thousands of willing participants. In order to do so, we obtained a test license from the German regulatory authority. This was never a problem, as there was a chunk of spectrum in the 1800 MHz GSM band that was not allocated to any commercial operator: the so-called DECT guard band. It is so called because it was kept free to ensure there is no interference between 1800 MHz GSM and neighboring DECT cordless telephones.

Over the decades, it was determined at the EU level that this guard band might not be necessary, or at least not if certain precautions are taken for BTSs deployed in that band.

When the German regulatory authority re-auctioned the GSM spectrum earlier this year, they decided to also auction the frequencies of the former DECT guard band. The DECT guard band was awarded to Vodafone.

This is a pity, as it means that people involved with cellular research or the development of cellular technology now find it significantly harder to actually test their systems.

In some other EU member states, like the Netherlands or the UK, it is easier: there the DECT guard band was not treated like just another chunk of the GSM bands, but put under special rules. Not so in Germany.

To make a long story short: without the explicit permission of one of the commercial mobile operators, it is not possible to run a test/experimental network like we used to run at the annual CCC congress.

Given that

  • the event is held in the city center (where frequencies are typically used and re-used quite densely), and
  • an operator has nothing to gain from permitting us to test our open source GSM/GPRS implementations,

I think there is little chance that this will become a reality.

If anyone has really good contacts to the radio network planning team of a German mobile operator and wants to prove me wrong: Feel free to contact me by e-mail.

Thanks to everyone involved with the GSM team at the CCC events, particularly Holger Freyther, Daniel Willmann, Stefan Schmidt, Jan Luebbe, Peter Stuge, Sylvain Munaut, Kevin Redon, Andreas Eversberg, Ulli (and everyone else whom I may have forgotten, my apologies). It's been a pleasure!

Thanks also to our friends at the POC (Phone Operation Center) who have provided interfacing to the DECT, ISDN, analog and VoIP network at the events. Thanks to roh for helping with our special patch requests. Thanks also to those entities and people who borrowed equipment (like BTSs) in the pre-sysmocom years.

So long, and thanks for all the fish!

by Harald Welte at November 15, 2015 11:00 PM

Progress on the Linux kernel GTP code

It is always sad if you start to develop some project and then never get around to finishing it, as there are too many things to take care of in parallel. But then, days only have 24 hours...

Back in 2012 I started to write some generic Linux kernel GTP tunneling code. GTP is the GPRS Tunneling Protocol, a protocol between core network elements in GPRS networks, later extended to be used in UMTS and even LTE networks.

GTP is split into a control plane for management and a user plane carrying the actual user IP traffic of a mobile subscriber. So if you're reading this blog via a cellular internet connection, your data is carried in GTP-U within the cellular core network.

To me as a former Linux kernel networking developer, the user plane of GTP (GTP-U) has always belonged in kernel space. It is a tunneling protocol not too different from many other tunneling protocols that already exist (GRE, IPIP, L2TP, PPP, ...), and for the user plane, all it does is basically add a header in one direction and remove the header in the other direction. Doing this per-packet work in the kernel matters for performance, particularly in networks with many subscribers and/or high bandwidth use.
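To show how little per-packet work GTP-U actually involves, here is a sketch of the GTPv1-U encapsulation in Python (header layout per 3GPP TS 29.281; simplified, ignoring optional fields, sequence numbers and extension headers):

```python
import struct

GTP_FLAGS = 0x30      # version 1, protocol type GTP, no optional fields
GTP_MSG_TPDU = 0xFF   # T-PDU: the payload is the subscriber's IP packet

def gtpu_encap(teid, payload):
    """Prepend the 8-byte GTPv1-U header to a user IP packet."""
    # flags (1), message type (1), payload length (2), TEID (4)
    return struct.pack("!BBHI", GTP_FLAGS, GTP_MSG_TPDU, len(payload), teid) + payload

def gtpu_decap(packet):
    """Strip the GTPv1-U header, returning (teid, payload)."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", packet[:8])
    assert flags == GTP_FLAGS and msg_type == GTP_MSG_TPDU
    return teid, packet[8:8 + length]
```

One header prepended on the way in, one stripped on the way out; everything else is ordinary IP forwarding, which is exactly why it fits so naturally next to the kernel's existing tunnel drivers.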

Also, unlike many other telecom / cellular protocols, GTP is an IP-only protocol with no E1, Frame Relay or ATM legacy. It also has nothing to do with SS7, nor does it use ASN.1 syntax and/or some exotic encoding rules. In summary, it is nothing like any other GSM/3GPP protocol, and looks much more of what you're used from the IETF/Internet world.

Unfortunately I didn't get very far with my code back in 2012, but luckily Pablo Neira (one of my colleagues from netfilter/iptables days) picked it up and moved it forward. However, it then stalled for some time, until it was thankfully picked up recently by Andreas Schultz; it now receives some attention and discussion, with the clear intention to finish it and submit it for mainline inclusion.

The code is now kept in a git repository at

Thanks to Pablo and Andreas for picking this up, let's hope this is the last coding sprint before it goes mainline and gets actually used in production.

by Harald Welte at November 15, 2015 11:00 PM

Osmocom Berlin meetings

Back in 2012, I started the idea of having a regular, bi-weekly meeting of people interested in mobile communications technology, not only strictly related to the Osmocom projects and software. This was initially called the Osmocom User Group Berlin. The meetings were held twice per month in the rooms of the Chaos Computer Club Berlin.

There are plenty of people that were or still are involved with Osmocom one way or another in Berlin. Think of zecke, alphaone, 2b-as, kevin, nion, max, prom, dexter, myself - just to name a few.

Over the years, I got "too busy" and was no longer able to attend regularly. Some people kept the meetings alive (thanks, dexter!), but they were eventually discontinued in 2013.

In October 2015 I revived the meetings; two have been held already, and the third is coming up next week, on November 11.

I'm happy that I had the idea of re-starting the meeting. It's good to meet old friends and new people alike. Both times there actually were some new faces around, most of which even had a classic professional telecom background.

In order to emphasize that the focus is strictly not on Osmocom alone (and particularly not on its users only), I decided to rename the event to the Osmocom Meeting Berlin.

If you're in Berlin and are interested in mobile communications technology on the protocol and radio side of things, feel free to join us next Wednesday.

by Harald Welte at November 15, 2015 11:00 PM

Germany's excessive additional requirements for VAT-free intra-EU shipments


At my company sysmocom we are operating a small web-shop providing small tools and accessories for people interested in mobile research. This includes programmable SIM cards, SIM card protocol tracers, adapter cables, duplexers for cellular systems, GPS disciplined clock units, and other things we consider useful to people in and around the various Osmocom projects.

We of course ship domestic, inside the EU and world-wide. And that's where the trouble starts, at least since 2014.

What are VAT-free intra-EU shipments?

As many readers of this blog (at least the European ones) know, inside the EU there is a system by which intra-EU sales between businesses in EU member countries are performed without charging VAT.

This is the result of different countries having different VAT rates, and of the fact that a business can always deduct the VAT it pays on its purchases from the VAT it has to charge on its sales. In order to avoid having to file VAT return statements in each of your suppliers' countries, the suppliers simply ship their goods without charging VAT in the first place.

In order to have checks and balances, both the supplier and the recipient have to file declarations to their tax authorities, indicating the sales volume and the EU VAT ID of the respective business partners.

So far so good. This concept was reasonably simple to implement, and it makes life easier for all involved businesses, so everyone participates in this scheme.
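The mechanics of that deduction can be shown with a toy calculation (hypothetical figures, using the 19% German rate):

```python
def net_vat_owed(sales_net, purchases_net, rate=0.19):
    """VAT a business remits: VAT charged on its sales minus VAT paid on its purchases."""
    return sales_net * rate - purchases_net * rate

# A business with 10,000 EUR of net sales and 4,000 EUR of net purchases
# remits 1900 - 760 = 1140 EUR of VAT.
```

Because the purchase-side VAT is always deducted anyway, charging it across borders would only create paperwork, which is why intra-EU B2B shipments skip it entirely.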

Of course there have always been some obstacles, particularly here in Germany. For example, you are legally required to confirm the EU VAT ID of the buyer before issuing a VAT-free invoice. This confirmation request can be done online.
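For illustration, the EU-level VIES service behind such confirmations accepts a SOAP checkVat request; the sketch below builds the request envelope. The namespace and field names are from memory and should be verified against the current VIES WSDL before use:

```python
def vies_check_vat_envelope(country_code, vat_number):
    """Build a SOAP request body for the EU VIES checkVat operation."""
    return f"""<soapenv:Envelope
  xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:v="urn:ec.europa.eu:taxud:vies:services:checkVat:types">
  <soapenv:Body>
    <v:checkVat>
      <v:countryCode>{country_code}</v:countryCode>
      <v:vatNumber>{vat_number}</v:vatNumber>
    </v:checkVat>
  </soapenv:Body>
</soapenv:Envelope>"""
```

The envelope would then be POSTed to the VIES endpoint, and the response indicates whether the given VAT ID is valid.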

However, the German tax authorities invented something unbelievable: a web API for confirming EU VAT IDs that has opening hours. Despite this rightfully having been at the center of ridicule by the German internet community for many years, it remains in place. So there are certain times of the day when you cannot verify EU VAT IDs, and thus cannot sell products VAT-free ;)

But even that, one has gotten used to living with.


More recently (since January 1st, 2014), the German authorities came up with the concept of the Gelangensbescheinigung. To the German reader, this newly invented word already sounds ugly enough. A literal translation is difficult, as it sounds really clumsy; think of something like a reaching-its-destination certificate.

So now it is no longer sufficient to simply verify the EU VAT ID of the buyer, issue the invoice and ship the goods; you also have to produce such a Gelangensbescheinigung for each and every VAT-free intra-EU shipment. This document needs to include:

  • the name and address of the recipient
  • the quantity and designation of the goods sold
  • the place and month when the goods were received
  • the date of when the document was signed
  • the signature of the recipient (not required in case of an e-mail where the e-mail headers show that the message was transmitted from a server under the control of the recipient)

How can you produce such a statement? Well, in the ideal/legal/formal case, you provide a form to your buyer, which he then signs, certifying that he has received the goods in the destination country.

First of all, I find it offensive that I have to ask my customers to make such declarations in the first place. And even if I accept this and go ahead with it, it is my legal responsibility to ensure that they actually fill it in.

What if the customer doesn't want to fill it in or forgets about it?

Then I, as the seller, am liable to pay 19% VAT on the purchase he made, despite never having charged those 19%.

So not only do I have to generate such forms and send them with my goods, but I also need a business process for checking their return and reminding customers whose forms are still outstanding; and in the end they can simply not return the form, and I lose money. Great.

Track+Trace / Courier Services

Now there are some alternate ways in which a Gelangensbescheinigung can be produced, for example by the track+trace protocol of the delivery company. However, the requirements on this track+trace protocol are so high that, at least when I checked in late 2013, the protocol of UPS did not fulfill them. For example, a track+trace protocol usually doesn't show the quantity and designation of the goods. Why would it? UPS just moves a package from A to B, and no customs is involved that would require knowing what's in the package.

Postal Packages

Now let's say you'd like to send your goods by postal service. For low-priced non-urgent goods, that's actually what you generally want to do, as everything else is simply way too expensive compared to the value of the goods.

However, this is only permitted if the postal service you use provides you with a receipt of having accepted your package, containing the following mandatory information:

  • name and address of the entity issuing the receipt
  • name and address of the sender
  • name and address of the recipient
  • quantity and type of goods
  • date of having received the goods

Now I don't know how this works in other countries, but in Germany you will not be able to get such a receipt from the post office.

In fact I inquired several times with the legal department of Deutsche Post, up to the point of sending a registered letter (by Deutsche Post) to Deutsche Post. They have never responded to any of those letters!

So we have the German tax authorities claiming that yes, of course you can still do intra-EU shipments to other countries by postal service, you just need to provide a receipt, while at the same time asking for a receipt indicating details that no postal receipt would ever show.

In particular, a postal receipt would never confirm what kind of goods you are sending. How would the postal service know? You hand them a package, and they transfer it. It is, rightfully, none of their business what its content may be. So how can you ask them to confirm that certain goods were received for transport?!?


So in summary:

Since January 1st, 2014, German tax regulations have been in force that make VAT-free intra-EU shipments extremely difficult, if not impossible:

  • The type of receipt they require from postal services is not provided by Deutsche Post, thereby making it impossible to use Deutsche Post for VAT free intra-EU shipments
  • The type of track+trace protocol issued by UPS does not fulfill the requirements, making it impossible to use them for VAT-free intra-EU shipments
  • The only other option is to get an actual receipt from the customer. If that customer doesn't want to provide this, the German seller is liable to pay the 19% German VAT, despite never having charged that to his customer


To me, the conclusion of all of this can only be one:

German tax authorities do not want German sellers to sell VAT-free goods to businesses in other EU countries. They are actively trying to undermine the VAT principles of the EU. And nobody seems to complain about it, or even realize there is a problem.

What a brave new world we live in.

by Harald Welte at November 15, 2015 11:00 PM

small tools: rtl8168-eeprom

Some time ago I wrote a small Linux command line utility that can be used to (re)program the Ethernet (MAC) address stored in the EEPROM attached to an RTL8168 Ethernet chip.

This is useful, for example, if you are a system integrator that has its own IEEE OUI range and would like to put your own MAC addresses into devices containing said Realtek Ethernet chips (which come pre-programmed with some other MAC address).
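The address generation such an integrator needs can be sketched as below. This is a hypothetical helper, not part of rtl8168-eeprom, and the RTL8168's actual EEPROM byte layout is chip-specific and not shown:

```python
def mac_from_oui(oui, serial):
    """Combine a 24-bit IEEE OUI with a 24-bit per-device serial into a MAC address.

    oui: the integrator's assigned OUI, e.g. 0x001122.
    serial: per-device number, incremented on the production line.
    """
    assert 0 <= serial < 1 << 24, "serial must fit in 24 bits"
    mac = (oui << 24) | serial
    # Emit the six bytes most-significant first, colon-separated.
    return ":".join(f"{(mac >> (8 * i)) & 0xFF:02x}" for i in reversed(range(6)))
```

The resulting string (or its raw bytes) would then be written to the EEPROM at the offset the chip expects.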

The source code can be obtained from:

by Harald Welte at November 15, 2015 11:00 PM

small tools: gpsdate

In 2013 I wrote a small Linux program that can be used to set the system clock based on the time received from a GPS receiver (via gpsd), particularly when a system is first booted. It is similar in purpose to ntpdate, but of course obtains the time not from NTP but from the GPS receiver.

This is particularly useful for RTC-less systems without network connectivity, which come up with a completely wrong system clock that needs to be properly set as soon as the GPS receiver finally acquires a signal.
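The core of such a tool is just reading gpsd's JSON stream and picking the timestamp out of a TPV report. A minimal sketch of that parsing step (the real gpsdate is a C program using libgps; this is only an illustration of the protocol):

```python
import json
from datetime import datetime, timezone

def time_from_tpv(line):
    """Extract the UTC time from one line of gpsd's JSON protocol, or None.

    gpsd emits one JSON object per line; TPV reports carry a "time" field
    once the receiver has a fix.
    """
    report = json.loads(line)
    if report.get("class") != "TPV" or "time" not in report:
        return None  # not a time-bearing report (e.g. no fix yet)
    return datetime.strptime(
        report["time"], "%Y-%m-%dT%H:%M:%S.%fZ"
    ).replace(tzinfo=timezone.utc)
```

A gpsdate-like tool would loop over the socket until this returns a value, call settimeofday() with it, and exit.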

I asked the ntp hackers if they were interested in merging it into the official code base, and their response was (summarized) that with a then-future release of ntpd this would no longer be needed. So the gpsdate program remains an external utility.

So in case anyone else might find the tool interesting: The source code can be obtained from

by Harald Welte at November 15, 2015 11:00 PM

Deutsche Bank / unstable interfaces

Deutsche Bank is a large, international bank. They offer services world-wide and are undoubtedly proud of their massive corporate IT department.

Yet, at the same time, they get the most fundamental principle of user/customer-visible interfaces wrong: don't change them. If you must change them, manage the change carefully.

In many software projects, keeping the API or other interface stable is paramount. Think of the Linux kernel, where breaking a userspace-visible interface is not permitted. The reasons are simple: If you break that interface, _everyone_ using that interface will need to change their implementation, and will have to synchronize that with the change on the other side of the interface.

The internet online banking system of Deutsche Bank in Germany permits the upload of transactions by their customers in a CSV file format.

And guess what? They change the file format from one day to the other.

  • without informing their users in advance, giving them time to adapt their implementations of that interface
  • without documenting the exact nature of the change
  • adding new fields to the CSV in the middle of the line, rather than at the end, making sure things break even more

Now if you run a business and depend on automating your payments via the interface provided by Deutsche Bank, this means you fail to pay your suppliers on time, and you hastily drop or delay other (paid!) work in order to figure out what exactly Deutsche Bank decided to change, completely unannounced, from one day to the next.
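One way to harden a client against such silent changes is to resolve columns by header name instead of by position, and to fail loudly when an expected header disappears. A sketch with hypothetical column names and a semicolon delimiter (the actual Deutsche Bank format is not documented here):

```python
import csv
import io

REQUIRED = ("date", "amount", "counterparty")

def parse_transactions(csv_text, delimiter=";"):
    """Parse bank CSV by header name, so columns inserted mid-line don't shift fields."""
    reader = csv.DictReader(io.StringIO(csv_text), delimiter=delimiter)
    missing = [h for h in REQUIRED if h not in (reader.fieldnames or [])]
    if missing:
        # Fail immediately and visibly instead of silently mis-assigning fields.
        raise ValueError(f"bank changed the CSV format, missing columns: {missing}")
    return [{h: row[h] for h in REQUIRED} for row in reader]
```

This doesn't help when the bank renames headers too, of course, but it at least turns a silent data corruption into an immediate, diagnosable error.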

If at all, I would have expected this from a hobbyist project. But seriously, from one of the world's leading banks? An interface that is probably used by thousands and thousands of users? WTF?!?

by Harald Welte at November 15, 2015 11:00 PM

The VMware GPL case

My absence from blogging meant that I didn't really publicly comment on the continued GPL violations by VMware, and the 2015 legal case that well-known kernel developer Christoph Hellwig has brought forward against VMware.

The most recent update by the Software Freedom Conservancy on the VMware GPL case can be found at

In case anyone ever doubted: I of course join the ranks of the long list of Linux developers and other stakeholders that consider VMware's behavior completely unacceptable, if not outrageous.

For many years they have been linking modified Linux kernel device drivers and entire kernel subsystems into their proprietary vmkernel software (part of ESXi). As an excuse, they have added a thin shim layer under GPLv2 which they call vmklinux. And to make all of this work, they had to add lots of vmklinux-specific APIs to the proprietary vmkernel. All the code runs as one program, in one address space, in the same thread of execution. So basically, it is the closest possible form of integration between two pieces of code: function calls within the same thread/process.

In order to make all this work, they had to modify their vmkernel, implement vmklinux, and heavily modify the code they took from Linux in the first place. So the drivers are no longer usable with mainline Linux, and vmklinux is not usable without vmkernel either.

If all of the above is not a clear indication that multiple pieces of code form one work/program (and subsequently must be licensed under GNU GPLv2), then what would be?

To me, it is probably one of the strongest cases one can find about the question of derivative works and the GPL(v2). Of course, all my ramblings have no significance in a court, and the judge may rule based on reports of questionable technical experts. But I'm convinced if the court was well-informed and understood the actual situation here, it would have to rule in favor of Christoph Hellwig and the GPL.

What I really don't get is why VMware puts up the strongest possible defense one can imagine. Not only did they not back down in lengthy out-of-court negotiations with the Software Freedom Conservancy, but they also defend themselves strongly against the claims in court.

In my many years of doing GPL enforcement, I've rarely seen such dedication and strong opposition. This shows the true nature of VMware as a malicious, unfair entity that doesn't give a damn about other people's copyright, about the Free Software community and its code of conduct as a whole, or about the Linux kernel developers in particular.

So let's hope they waste a lot of money in their legal defense, get a sufficient amount of negative PR out of this to the point of tainting their image, and finally obtain a ruling upholding the GPL.

All the best to Christoph and the Conservancy in fighting this fight. For those readers that want to help their cause, I believe they are looking for more supporter donations.

by Harald Welte at November 15, 2015 11:00 PM

What I've been busy with

Those who don't know me personally and/or haven't stayed in touch more closely might be wondering what on earth happened to Harald in the last year or more.

The answer would be long, but I can summarize it to I disappeared into sysmocom. You know, the company that Holger and I founded four years ago, in order to commercially support OpenBSC and related projects, and to build products around it.

In recent years, the team has grown to the point where in 2015 we suddenly had 9 employees and a handful of freelancers working for us.

But then, that's still a small company, and based on the projects we're involved in, that team has to cover a variety of topics (next to the actual GSM/GPRS-related work), including:

  • mechanical engineering (enclosure design)
  • all types of electrical engineering
    • AC/electrical wiring/fusing on DIN rails
    • AC/DC and isolated DC/DC power supplies (based on modules)
    • digital design
    • analog design
    • RF design
  • prototype manufacturing and testing
  • software development
    • bare-metal bootloader/OS/application on Cortex-M0
    • NuttX on Cortex-M3
    • OpenAT applications on Sierra Wireless
    • custom flavors of Linux on several different ARM architectures (TI DaVinci, TI Sitara)
    • drivers for various peripherals including Ethernet Switches, PoE PSE controller
    • lots of system-level software for management, maintenance, control

I've been involved in literally all of those topics, with more of my time spent on the electronics side than on the software side. And within software, more on the bootloader/RTOS side than on applications.

So what did we actually build? It's unfortunately still not possible to disclose fully at this point, but it was all related to marine communications technology. GSM being one part of it, but only one of many in the overall picture.

Given the quite challenging breadth of the tasks at hand and problems to solve, I'm actually surprised how much we could achieve with such a small team in a limited amount of time. But then, there was virtually no time left for anything else, which meant no blogging, no progress on the various Osmocom Erlang projects for core network protocols, and last but not least no Taiwan holidays this year.

Lately I see light at the end of the tunnel, and there is again a bit more time to get back to old habits, and thus I

  • resurrected this blog from the dead
  • resurrected various project homepages that have disappeared
  • started some more work on actual telecom stuff (osmo-iuh, for example)
  • restarted the Osmocom Berlin Meeting

by Harald Welte at November 15, 2015 11:00 PM

Weblog + homepage online again

On October 31st, 2014, I rebooted my main server for a kernel upgrade and could never mount the LUKS crypto volume again. While the technical cause for this remains a mystery to this day (it has spawned some conspiracy theories), I finally took some time to recover some bits and pieces from elsewhere. I didn't want this situation to drag on for more than a year...

Rather than bringing the old content online with the sub-optimal and clumsy tools previously used to generate it (web sites generated by docbook-xml, blog by blosxom), I decided to give it a fresh start and try nikola, a more modern and actively maintained tool for generating static web pages and blogs.

The blog is now available at (a redirect from the old /weblog is in place, for those who keep broken links for more than 12 months). The RSS feed URLs are different from before, but there are again per-category feeds so people (and planets) can subscribe to the respective category they're interested in.

And yes, I do plan to blog again more regularly, to make this place not just an archive of a decade of blogging, but a place that is alive and thrives with new content.

My personal web site is available at while my (similarly re-vamped) freelancing business web site is also available again at

I still need to decide what to do about the old site. It still has its old, manual web 1.0 structure from the late 1990s.

I've also re-surrected and as well as (old content). Next in line is, which I also intend to convert to nikola for maintenance reasons.

by Harald Welte at November 15, 2015 11:00 PM

November 12, 2015


NC393 progress update: 14MPix Sensor Front End is up and running

10398 Sensor Front End with 14MPix MT9F002

Sensors (ON Semiconductor MT9F002) and blank PCBs arrived in time, so I was able to hand-assemble two 10398 boards and start testing them. I had some minor problems getting data output from the first board, but it turned out to be just my bad soldering of the sensor; the second board worked immediately. To my surprise I did not have any problems with the HiSPi decoder, which I had simulated using a sensor model I wrote myself from the documentation, so the color bar test pattern appeared almost immediately, followed by real acquired images. I kept most of the sensor settings at their default values, just selecting the correct PLL multiplier, output signal levels (1.8V HiVCM – compatible with the FPGA) and packetized format; the only other registers I had to adjust manually were exposure and the analog color gains.

As was reasonable to expect, the sensitivity of the 14MPix sensor is lower than that of the 5MPix MT9P006 – our initial estimate is that it is 4 times lower, but this needs more careful measurements to find the exposure required for pixel saturation under the same illumination. We set the analog channel gains for both sensors slightly higher than the minimum needed for saturation, but such rough measurements could easily miss a factor of 1.5. The MT9F002 offers more control over the signal chain gains, but any gain in the chain (even analog) that boosts the signal above the minimum needed for saturation proportionally reduces the used “well capacity”, while I expect the Full Well Capacity (FWC) is already not very high for this 1.4μm × 1.4μm pixel sensor. A decrease in the number of electrons stored in a pixel accordingly increases the relative shot noise, which reveals itself in the highlight areas. We will need to accurately measure the FWC of the MT9F002 and make a better sensitivity comparison, including the binned mode, but I expect to find that the 5MPix sensors are not obsolete yet and for some applications may still have advantages over the newer ones.
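
The effect of reduced effective well capacity on shot noise can be illustrated with a quick calculation. This is only a sketch: the full-well values below are made-up placeholders for illustration, not measurements of either sensor.

```python
import math

def relative_shot_noise(electrons):
    """Shot noise is Poisson: sigma = sqrt(N), so the relative
    noise at a signal level of N electrons is 1/sqrt(N)."""
    return 1.0 / math.sqrt(electrons)

# Hypothetical full-well capacities, for illustration only --
# neither value is a measured figure for these sensors.
fwc_small_pixel = 4000    # e.g. a small (1.4um-class) pixel
fwc_large_pixel = 16000   # e.g. a larger pixel

for fwc in (fwc_small_pixel, fwc_large_pixel):
    print(fwc, "e-:", round(relative_shot_noise(fwc) * 100, 2), "% at saturation")

# Extra gain that makes the pixel saturate at half the well capacity
# raises the relative shot noise in the highlights by sqrt(2):
print(relative_shot_noise(fwc_small_pixel / 2) / relative_shot_noise(fwc_small_pixel))
```

This is why boosting analog gain beyond what is needed for saturation directly costs highlight quality.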

Image acquired with 5 MPix MT9P006 sensor, 1/2000 s

Image acquired with 14MPix MT9F002 sensor, 1/500 s

Both sensors used identical f=4.5mm F3.0 lenses; the lens of the 5MPix sensor was precisely adjusted during calibration, while the lens of the 14MPix sensor was just attached and focused by hand using the lens thread, with no tilt correction performed. Both images are saved at 100% JPEG quality to minimize compression artifacts, and both used the simple in-camera 3×3 demosaic algorithm. The 14MPix image has a visible checkerboard pattern caused by the difference between the two green values (green in the red row and green in the blue row). I’ll check that it is not caused by some FPGA code bug I might have introduced (by saving a raw image and de-Bayering on a host computer), but it may also be caused by pixel cross-talk in the sensor. In either case it should be possible to compensate for it, or at least significantly reduce it, in the output data.

The MT9F002 transmits data over 5 differential 100Ω pairs: 1 clock pair and 4 data lanes. For the initial tests I used our regular 70mm flex cable intended for the parallel-interface sensors, and just soldered five 100Ω resistors to the contacts at the camera-side end. It did work, and I did not even have to do any timing adjustments of the differential lanes. We’ll do such adjustments in the future to get to the centers of the data windows – both the sensor and the FPGA code have provisions for that. The physical 100Ω load resistors were needed because it turned out that the Xilinx Zynq has on-chip differential termination only for 2.5V (or higher) supply voltages on the regular (not “high performance”) I/Os, and this application uses 1.8V interface power – I missed that part of the documentation and assumed that all the differential inputs could enable differential termination. The 660 Mbps/lane data rate is not too high, and I expect that it will be possible to use short cables with no load resistors at all; adding such resistors to the 10393 board is not an option, as it has to work with both serial and parallel sensor interfaces. Simultaneously we designed and ordered dedicated flex cables 150mm long; if those work out, we’ll try longer (450mm) controlled-impedance cables.

by andrey at November 12, 2015 08:43 PM

November 10, 2015


Infineon BFR740 - 42GHz BJT : weekend die-shot

Infineon BFR740L3RH - bipolar SiGe RF transistor with a transition frequency of 42 GHz in a very small leadless package (TSLP-3-9 - 0.6×1×0.31mm).
Die size 305x265 µm.

After metal etch we can see that it's not that simple:

Main active area (scale 1px = 57nm):

November 10, 2015 05:18 AM

November 04, 2015


NC393 progress update: one gigapixel per second (12x faster than NC353)

All the PCBs for the new camera – 10393, 10389 and 10385 – have been modified to rev “A”; we have already received the new boards from the factory and are now waiting for the first production batch to be built. The PCB changes are minor, just moving connectors away from the board edge to simplify the mechanical design and improve the thermal contact of the heat sink plate with the camera body. Additionally, the 10389A got an M.2 connector instead of mSATA to accommodate modern SSDs.

While waiting for production we designed a new sensor board (10398) that has exactly the same dimensions and image sensor format as the current 10338E, so it is compatible with the hardware for the calibrated sensor front ends we use in photogrammetric cameras. The difference is that the MT9F002 is a 14MPix device and has a high-speed serial interface instead of the legacy parallel one. We expect to get the new boards and the sensors next week and will immediately start working with the new hardware.

In preparation for the faster sensors I started working on the FPGA code to make it ready for the new devices. We planned to use modern sensors with serial interfaces from the very beginning of the new camera design, so the hardware accommodates up to 8 differential data lanes plus a clock pair, in addition to I²C and several control signals. One obviously required part is support for the Aptina HiSPi (High Speed Serial Pixel) interface, which in the case of the MT9F002 uses 4 differential data lanes, each running at 660 Mbps – in 12-bit mode that corresponds to 220 MPix/s. Until we get the actual sensors I can only simulate reception of the HiSPi data using a sensor model we wrote ourselves following the interface documentation. I still need to make sure I understood the documentation correctly and that the sensor will produce output similar to what we modeled.

The sensor interface is not the only piece of the code that needed changes: I also had to significantly increase the bandwidth of the FPGA signal processing and modify the I²C sequencer to support 2-byte register addresses.

Data that the FPGA receives from the sensor passes through several clock domains until it is stored in system memory as a sequence of compressed JPEG/JP4 frames:

  • Sensor data in each channel enters the FPGA at the pixel clock rate and subsequently passes through the vignetting correction/scaling module, the gamma conversion module and the histogram calculation modules. The output of this chain is buffered before crossing into the memory clock domain.
  • The multichannel DDR3 memory controller records sensor data in line-scan order and later retrieves it in overlapping (for JPEG) or non-overlapping (for JP4) square tiles.
  • Data tiles retrieved from the external DDR3 memory are sent to the compressor clock domain to be processed with the JPEG algorithm. In color JPEG mode the compressor bandwidth has to be 1.5× the pixel rate, as with 4:2:0 encoding each 16×16 pixel macroblock generates six 8×8 image blocks – four for Y (intensity) and two for the color components. In JP4 mode, when the de-mosaic algorithm runs on the host computer, the compressor clock rate equals the pixel rate.
  • The last clock domain is the 150MHz one used by the AXI interface, which operates in 64-bit parallel mode and transfers the compressed data to system memory.
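
The 1.5× figure for the color JPEG case follows from simple block counting, and can be sanity-checked like this (a sketch; the 220 MPix/s value is the MT9F002 peak rate quoted earlier):

```python
# In 4:2:0 encoding a 16x16 macroblock (256 pixels) produces six
# 8x8 blocks = 384 values, so the compressor must process values
# at 384/256 = 1.5x the pixel rate. In JP4 mode nothing is added.

macroblock_pixels = 16 * 16
blocks_420 = 4 + 1 + 1            # 4x Y, 1x Cb, 1x Cr
values_420 = blocks_420 * 8 * 8

ratio = values_420 / macroblock_pixels
print(ratio)                      # 1.5

pixel_rate_mpix = 220             # MT9F002 peak output rate, MPix/s
print(pixel_rate_mpix * ratio)    # 330.0 -- block values per second, in millions
```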

Two of these domains use a double-rate clock for some of the processing stages – histogram calculation in the pixel clock domain, and the Huffman encoder/bit stuffer in the compressor. In the previous NC353 camera the pixel clock rate was 96MHz (192MHz for double rate) and the compressor rate was 80MHz (160MHz for double rate). The difference between the sensor and compressor clock rates reflects the fact that the sensor data output is not uniform (it pauses during inactive lines), while the compressor can process the frame at a steady rate.

The MT9F002 image sensor has a peak output pixel rate of 220MPix/s, with an average (over the full frame) rate of 198MPix/s. Using double-rate clocks (440MHz for the sensor channel and 400MHz for the compressor) would be rather difficult on the Zynq, so I first needed to eliminate such clocks from the design. It was possible to implement and test this modification with the existing sensor, and now it is done – the camera’s four compressors each run at 250MHz (even on “-1″, or “slow”, speed grade silicon), for a total of 1GPix/s. This does not require 4 separate sensors running simultaneously – a single high speed imager can provide data for all 4 compressors, each processing every 4th frame, as each image is processed independently.

At this time the memory controller will be the bottleneck when running all four MT9F002 sensors simultaneously, as it currently provides only 1600MB/s of bandwidth – just marginally sufficient for four MT9F002 sensor channels plus 4 compressor channels, each requiring 200MB/s (the bandwidth overhead is just a few percent). I am sure it will be possible to optimize the memory controller code to run at a higher rate to match the compressors. We have already identified which parts of the memory controller need to be modified to support a 1.5× clock increase, to a total of 2400MB/s. And as the production NC393 camera will have a higher speed grade SoC, there will be an extra 20% performance increase for the same code. That will provide bandwidth sufficient not just to run 4 sensors at full speed and compress the output data, but also to do some other image manipulation at the same time.
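
The memory budget can be checked with back-of-the-envelope arithmetic. All figures come from the text above; the per-channel rate is approximate, and the two-streams-per-frame assumption (each frame written to DDR3 once and read back once) is mine.

```python
# Rough memory bandwidth budget for four MT9F002 channels.

per_channel_mb_s = 200              # one sensor write OR one compressor read
channels = 4
streams = 2                         # each frame: one write, one read-back

required = per_channel_mb_s * channels * streams
current_controller = 1600           # MB/s, current memory controller
planned_controller = current_controller * 1.5   # after the 1.5x clock increase

print(required, current_controller, planned_controller)
# The required 1600 MB/s exactly matches the current controller (hence
# "marginally sufficient"); 2400 MB/s would leave headroom for other work.
```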

Compared to the previous Elphel NC353 camera, the new NC393 prototype is already tested to have 12× the compressor bandwidth (4 channels instead of one, and 250MPix/s instead of 80MPix/s); we plan to have results from the actual sensor with the full data processing chain soon.

by andrey at November 04, 2015 06:41 AM

November 03, 2015

Free Electrons

Linux 4.3 released, Free Electrons contributions inside

The 4.3 kernel was released just a few days ago. For details about the big new features in this release, we as usual recommend reading the articles covering the merge window: part 1, part 2 and part 3.

According to the KPS statistics, there were 12128 commits in this release, and with 110 patches, Free Electrons is the 20th contributing company. As usual, we made a number of contributions to this release, though somewhat fewer than for previous ones.

Our main contributions this time around:

  • On the support for Atmel ARM SoCs
    • Alexandre Belloni contributed a fairly significant number of cleanups: description of the slow clock in the Device Tree, and removal of left-overs from platform-data usage in device drivers (no longer needed now that all Atmel ARM platforms use the Device Tree).
    • Boris Brezillon contributed numerous improvements to the atmel-hlcdc, which is the DRM/KMS driver for the modern Atmel ARM SoCs. He added support for several SoCs to the driver (SAMA5D2, SAMA5D4, SAM9x5 and SAM9n12), added PRIME support, and support for the RGB565 and RGB444 output configurations.
    • Maxime Ripard improved the dmaengine drivers for Atmel ARM SoCs (at_hdmac and at_xdmac) to add memset and scatter-gather memset capabilities.
  • On the support for Allwinner ARM SoCs
    • Maxime Ripard converted the SID driver to the newly introduced nvmem framework. Maxime also did some minor pin-muxing and clock related updates.
    • Boris Brezillon fixed some issues in the NAND controller driver.
  • On the support for Marvell EBU ARM SoCs
    • Thomas Petazzoni added the initial support for suspend to RAM on Armada 38x platforms. The support is not fully enabled yet due to remaining stability issues, but most of the code is in place. Thomas also did some minor updates/fixes to the XOR and crypto drivers.
    • Grégory Clement added the initial support for standby, a mode that allows forcefully putting the CPUs into deep-idle mode. For now it is no different from what cpuidle provides, but in the future we will progressively extend this mode to shut down PHY and SERDES lanes to save more power.
  • On the RTC subsystem, Alexandre Belloni did numerous fixes and cleanups to the rx8025 driver, and also a few to the at91sam9 and at91rm9200 drivers.
  • On the common clock framework, Boris Brezillon contributed a change to the ->determine_rate() operation to fix overflow issues.
  • On the PWM subsystem, Boris Brezillon contributed a number of small improvements/cleanups to the subsystem and some drivers: addition of a pwm_is_enabled() helper, migrate drivers to use the existing helper functions when possible, etc.

The detailed list of our contributions is:

by Thomas Petazzoni at November 03, 2015 03:11 PM

October 28, 2015

Andrew Zonenberg, Silicon Exposed

New GPG key

Hi everyone,

I've been busy lately and haven't had a chance to post much. There will be a pretty good-sized series coming up in a month or two (hopefully) on my next-gen FPGA cluster and JTAG stuff, but I'm holding off until I have something better to write about.

In the meantime, I've decided that my circa 2009 GPG key is long overdue for replacement so I've issued a new one and am posting the fingerprints in multiple public locations (this being one).

The new key fingerprint is:
859B A7BA DE9C 0BD5 EC01  FF36 3461 7AB9 B31C 7D7C

Verification message signed with my old key:

by Andrew Zonenberg at October 28, 2015 01:37 AM

October 27, 2015

Bunnie Studios

Name that Ware October 2015

The Ware for October 2015 is shown below.

…and one of the things that plugs into the slots visible in the photo above as an extra hint…

Thanks again to Nava Whiteford for sharing this ware. Visit his blog and help him get permission from his wife to buy a SEM!

by bunnie at October 27, 2015 07:54 AM

Winner, Name that Ware September 2015

The Ware for September 2015 is a Powerex CM600HA-24H, which met its demise serving as a driver for a tesla coil in the Orage sculpture (good guess 0xbadf00d!). I have a thing for big transistors, and I was very pleased to be gifted this one even though it was busted. At $300 apiece, it’s not something I’d just get up and buy because I want to wear it around as a piece of jewelry; but it did make for a great, if heavy, necklace. And it was interesting to take apart to see what was inside!

As for the winner, Jimmyjo was the first to guess exactly the model of the IGBT. Congrats, email me for your prize!

by bunnie at October 27, 2015 07:53 AM

October 26, 2015


CHANGJIANG MMBT2222A - npn BJT transistor : weekend die-shot

Unlike the OnSemi MMBT2222A, the CHANGJIANG MMBT2222A has both a smaller die and a simpler layout (BC847-like) – which should cause significantly lower hFE at high collector currents.

Die size 234x234 µm.

October 26, 2015 07:26 AM

October 19, 2015


Linear LT1021-5 ±0.05% precision reference : weekend die-shot

Expected heavy-duty digital correction? Nope. Just 15 fuses and a buried Zener – truly a work of art.
Die size 2354x1364 µm.

October 19, 2015 08:05 AM

October 11, 2015


ST UA741 - the opamp : weekend die-shot

The µA741 was the first "usable", widespread solid-state opamp, mainly due to its integrated frequency-compensation capacitor (which we now take for granted in general-purpose opamps). This chip has been reimplemented numerous times since 1968, like this ST UA741 from 2001. You can also take a look at the historic schematic of the µA741 here.

Die size 1073x993 µm.

October 11, 2015 05:32 PM

October 06, 2015

Video Circuits

Experiments using the Rutt-Etra Analog Video Synthesizer and Siegel colorizer, 1975

Video Synthesis Experiments, excerpts from  Edin Velez on vimeo.

A rare example of the Siegel Colorizer in use in this short excerpt.

by Chris at October 06, 2015 12:26 PM

September 29, 2015


Google is testing AI to respond to privacy requests

Robotic customer support fails while pretending to be an outsourced human. Last week I searched Google for Elphel, and I got a wrongly spelled name, a wrong address and a wrong phone number.

Google search for Elphel

A week ago I tried a Google search for our company (usually I only check recent results, using a last-week or last-3-days search) and noticed that the first result page included a Street View of my private residence, with my home address listed as a business named “El Phel, Inc”.

Yes, when we first registered Elphel in 2001 we used our home address, and even the first $30K check from Google, for development of the Google Books camera, came to this address – but the company was never “El Phel, Inc.” Later wire transfers with payments for the Google Books cameras as well as the Street View ones came to a different address – 1405 W. 2200 S., Suite 205, West Valley City, Utah 84119. In 2012 we moved to the new building at 1455 W. 2200 S., as the old place was not big enough for panoramic camera calibration.

I was not happy to see my house showing up as the top result when searching for Elphel: it is both a breach of my family’s privacy and harmful to Elphel’s business. Personally, I would not consider a 14-year-old company with an international customer base a serious one if it were just a one-man home-based business. Sure, you can get similar Street View results for Google itself, but they would not come up when you search for “Google”. Neither would Google return a wrongly spelled business name like “Goo & Gel, Inc.” and a phone number that belongs to a Baptist church in Lehi, Utah (update: they changed the phone number to the one of Elphel).

Google original location

Honestly, there was some fault of ours too: I had seen “El Phel” in a local Yellow Pages, but as we do not have a local business I did not pay attention to it – Google was always good at providing relevant information in search results, extracting actual contact information directly from a company’s “Contacts” page.

Noticing that Google had lost its edge in providing search results (Bing and Yahoo show relevant data), I first contacted Yellow Pages and asked them to correct the information, as there is no “El Phel, Inc.” at my home address and I am not selling any X-ray equipment there. They did it very promptly, and the probable source of the Google misinformation (“probable”, as Google does not provide any links to its sources) was gone for good.

I waited 24 hours, hoping that Google would correct the information automatically (a post on the Elphel blog appears in Google search results 10–19 seconds after I press the “Publish” button). Nothing happened – same “El Phel, Inc.” in our house.

So I tried to contact Google. As Google did not provide the source of the search result, I tried to follow the recommendations for correcting information on the map. The first step was to log in with a Google account, since I could not find a way to contact Google without one. Yes, I do have one – I used Gmail when Google was our customer, and when I later switched to another provider (I prefer to use only one service per company, and I chose Google Search) I did not delete the Gmail account. I found my password and was able to log in.

First I tried to select “Place doesn’t exist” (there is no such company as “El Phel, Inc.” with that invalid phone number, and there is no business at my home address).

Auto confirmation came immediately:
From: Google Maps <>
Date: Wed, Sep 23, 2015 at 9:55 AM
Subject: Thanks for the edit to El Phel Inc
To: еlphеl@gmаil.cоm
Thank you
Your edit is being reviewed. Thanks for sharing your knowledge of El Phel Inc.
El Phel Inc
3200 Elmer St, Magna, UT, United States
Your edit
Place doesn't exist
Edited on Sep 23, 2015 · In review
Keep exploring,
The Google Maps team
© 2015 Google Inc. 1600 Amphitheatre Parkway, Mountain View, CA 94043
You've received this confirmation email to update you about your editing activities on Google Maps.

But nothing happened. Two days later I tried a different option (there was no place to provide a text entry):
Your edit
Place is private

No results either.

Then I tried to follow the other link next to the inappropriate search result – “Are you the business owner?” (I am not an owner of the non-existing business, but I am the owner of my house). And yes, I had to use my Gmail account again. There were several options for how I preferred to be contacted – I selected “by phone”, and shortly afterwards a female-voiced robot called. I do not have a habit of talking to robots, so I did not listen to what it said, waiting for keywords like “press 0 to talk to a representative” or “please stay on the line…” – but it never said anything like that and immediately hung up.

The second time I selected email contact, but it seems to me that the email conversation was with some kind of Google Eliza. This was the first email:

From :
To :
Subject : RE: [7-2344000008781] Google Local Help
Date : Thu, 24 Sep 2015 22:48:47 -0700
Add Label
Greetings from Google.
After investigating, i found that here is an existing page on Google (El Phel Inc-3200 S Elmer St Magna, UT 84044) which according to your email is incorrect information.
Apologies for the inconvenience andrey, however as i can see that you have created a page for El Phel Inc, hence i would first request you to delete the Business page if you aren't running any Business. Also you can report a problem for incorrect information on Maps,Here is an article that would provide you further clarity on how to report a problem or fix the map.
In case you have any questions feel free to reply back on the same email address and i would get back to you.
Google My Business Support.

This robot tried to mimic a kid’s language (not capitalizing “I” or the first letter of my name), and its level of understanding of the matter was below that of a human (it was Google, not me, who created that page – I just wanted it removed).

I replied, as I thought it still might be a human – just tired and overwhelmed by the many privacy-related requests they receive (the email came well after business hours in the United States).

From : andrey <>
To :
Subject : RE: [7-2344000008781] Google Local Help
Date : Fri, 25 Sep 2015 00:16:21 -0700
Hello Rohit,
I never created such page. I just tried different ways to contact Google to remove this embarrassing link. I did click on "Are you the business owner" (I am the owner of this residence at 3200 S Elmer St Magna, UT 84044) as I hoped that when I'll get the confirmation postcard I'll be able to reply that there is no business at this residential address).
I did try link "how to report a problem or fix the map", but I could not find a relevant method to remove a search result that does not reference external page as a source, and assigns my home residence to the search results of the company, that has a different (than listed) name, is located in a different city (West Valley City, 84119, not in Magna, 84044), and has a different phone number.
So please, can you remove that incorrect information?
Andrey Filippov

Nothing happened either; then on Sunday night (local time) came another email from “Rohit”:

From :
To :
Subject : RE: [7-2344000008781] Google Local Help
Date : Sun, 27 Sep 2015 18:11:44 -0700
Greetings from Google.
I am working on your Business pages and would let you know once get any update.
Please reply back on the same email address in case of any concerns.
Google My Business Support

You may notice that it has the same ticket number, so the sender had all the previous information when replying. For any human capable of using just Google Search, it would take no more than 15–30 seconds to find out that their information is incorrect and either remove it completely (as I asked) or replace it with something relevant.

And there is another detail that troubles me. Looking at the times and days when the “Google My Business Support” emails came, and the name “Rohit”, it may look like they came from India. While testing non-human communications, Google might hope that correspondents would be more likely to attribute inconsistencies in the generated emails to cultural differences, and so miss actual software flaws. Does Google count on us being somewhat racist?

Following the provided links, I was not able to get any response from a human representative; only two robots (phone and email) contacted me. I hope that this post will work better and help cure this breach of my family’s privacy, and end the harm that this invalid information, provided by such a respected Internet search company, causes to our business. I realize that robots will take over more and more of our activities (and we are helping that happen ourselves), but maybe this process sometimes goes too fast?

by andrey at September 29, 2015 04:25 AM

September 28, 2015

Bunnie Studios

Sex, Circuits & Deep House

Cari with the Institute Blinky Badge at Burning Man 2015. Photo credit: Nagutron.

This year for Burning Man, I built a networked light badge for my theme camp, “The Institute”. Walking in the desert at night with no light is a dangerous proposition – you can get run over by cars or bikes, or twist an ankle tripping over an errant bit of rebar sticking out of the ground. Thus, the outrageous, bordering on grotesque, lighting spectacle that Burning Man becomes at night grows out of a central need for safety in the dark. While a pair of dimly flashing red LEDs should be sufficient to ensure one’s safety, anything more subtle than a Las Vegas strip billboard tends to go unnoticed by fast-moving bikers, thanks to the LED arms race that Burning Man has become at night.

I wanted to make a bit of lighting that my campmates could use to stay safe – and optionally stay classy by offering a range of more subtle lighting effects. I also wanted the light patterns to be individually unique, allowing easy identification in dark, dusty nights. However, diddling with knobs and code isn’t a very social experience, and few people bring laptops to Burning Man. I wanted to come up with a way for people to craft an identity that was inherently social and interactive. In an act of shameless biomimicry, I copied nature’s most popular protocol for creating individuals – sex.

By adding a peer-to-peer radio in each badge, I was able to implement a protocol for the breeding of lighting patterns via sex.

Some examples of the unique light patterns possible through sex.


When most people think of sex, what they are actually thinking about is sexual intercourse. This is understandable, as technology allows us to have lots of sexual intercourse without actually accomplishing sexual reproduction. Still, the double-entendre of saying “Nice lights! Care to have sex?” is a playful ice breaker for new interactions between camp mates.

Sex, in this case, is used to breed the characteristics of the badge’s light pattern as defined through a virtual genome. Things like the color range, blinking rate, and saturation of the light pattern are mapped into a set of diploid (two copies of each gene) chromosomes (code) (spec). Just as in biological sex, a badge randomly picks one copy of each gene and packages them into a sperm and an egg (every badge is a hermaphrodite, much like plants). A badge’s sperm is transmitted wirelessly to another host badge, where it’s mixed with the host’s egg and a new individual blending traits of both parents is born. The new LED pattern replaces the current pattern on the egg donor’s badge.
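
The gamete-and-mating scheme described above can be sketched roughly as follows. The gene names and 8-bit allele values here are hypothetical illustrations, not the badge's actual genome layout.

```python
import random

# Hypothetical gene set -- the real badge genome (color range, blink
# rate, saturation, ...) is defined in the linked code and spec.
GENES = ["hue_base", "hue_range", "saturation", "blink_rate"]

def make_badge():
    """Each badge is diploid: two alleles (0..255) per gene."""
    return {g: [random.randrange(256), random.randrange(256)] for g in GENES}

def gamete(badge):
    """Meiosis: randomly pick one of the two alleles of each gene."""
    return {g: random.choice(alleles) for g, alleles in badge.items()}

def mate(maternal, paternal):
    """The host badge mixes its egg with the received sperm; the
    resulting child replaces the pattern on the egg donor's badge."""
    egg, sperm = gamete(maternal), gamete(paternal)
    return {g: [egg[g], sperm[g]] for g in GENES}

child = mate(make_badge(), make_badge())
print(child)
```

Because every badge is a hermaphrodite, the same structure serves as both sperm donor and egg donor depending on who initiates.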

Biological genetic traits are often analog, not digital – height or weight are not coded as discrete values in a genome. Instead, observed traits are the result of a complex blending process grounded in the minutiae of metabolic pathways and the efficacy of enzymes resulting from the DNA blueprint and environment. The manifestation of binary situations like recessive vs. dominant is often the result of a lot of gain being applied to an analog signal, thus causing the expressed trait to saturate quickly if it’s expressed at all.

In order to capture the wonderful diversity offered by sex, I implement quantitative traits in the light genome. Instead of having a single bit for each trait, it’s a byte, and there’s an expression function that combines the values from each gene (alleles) to derive a final observed trait (phenotype).

By carefully picking expression functions, I can control how the average population looks. Let’s consider saturation (I used an HSV colorspace, instead of RGB, which makes it much easier to create aesthetically pleasing color combinations). A highly saturated color is vivid and bright. A less saturated color appears pastel, until finally it’s washed out and looks just white or gray (a condition analogous to albinism).

If I want albinism to be rare, and bright colors to be common, the expression function could be a saturating add. Thus, even if one allele (copy of the gene) has a low value, the other copy just needs to be a modest value to result in a bright, vivid coloration. Albinism only occurs when both copies have a fairly low value.

Population makeup when using saturating addition to combine the maternal and paternal saturation values. Albinism – a badge light pattern looking white or gray – happens only when both maternal and paternal values are small. ‘S’ means large saturation, and ‘s’ means little saturation. ‘SS’ and ‘Ss’ pairings of genes leads to saturated colors, while only the ‘ss’ combination leads to a net low saturation (albinism).

On the other hand, if I wanted the average population to look pastel, I could simply take the average of the two alleles as the saturation value. In this case, a bright color can only be achieved if both alleles have a high value. Likewise, albinism can only occur if both alleles have a low value.

Population makeup when using averaging to combine the maternal and paternal saturation values. The most common case is a pastel palette, with vivid colors and albinism both suppressed in the population.

For Burning Man, I chose saturating addition as the expression function, to have the population lean toward vivid colors. I implemented other features such as cyclic dimming, hue rotation, and color range using similar techniques.
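
The two candidate expression functions compare like this (a sketch assuming 8-bit allele values; the exact arithmetic in the badge firmware may differ):

```python
# Two candidate expression functions for the saturation trait.

def saturating_add(a, b):
    """'SS' and 'Ss' both express as vivid color; albinism ('ss')
    requires both alleles to be small."""
    return min(a + b, 255)

def averaging(a, b):
    """Vivid color needs both alleles high, albinism needs both low;
    most of the population lands in pastel territory."""
    return (a + b) // 2

print(saturating_add(200, 30), averaging(200, 30))    # 230 115
print(saturating_add(200, 200), averaging(200, 200))  # 255 200
print(saturating_add(10, 20), averaging(10, 20))      # 30 15
```

Note how a single high allele is enough for a vivid result under saturating addition, while averaging pulls the same pairs toward the middle.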

It’s important when thinking about biological genes to remember that they aren’t like lines of computer code. Rather, they are like the knobs on an analog synth: the resulting sound depends not just on the position of a knob, but also on where it sits in the signal chain and how it interacts with other effects.

Gender and Consent

Beyond genetics, there is a minefield of thorny decisions to be made when implementing the social policies and protocols around sex. What are the gender roles? And what about consent? This is where technology and society collide, making for a fascinating social experiment.

I wanted everyone to have an opportunity to play both gender roles, so I made the badges hermaphroditic, in the sense that everyone can give or receive genetic material. The “maternal” role receives sperm, combines it with an egg derived from the currently displayed light pattern, and replaces its light pattern with a new hybrid of both. The “paternal” role can transmit a sperm derived from the currently displayed pattern. Each badge has the requisite ports to play both roles, and thus everyone can play the role of male or female simply by being either the originator of or responder to a sex request.

This leads us to the question of consent. One fundamental flaw in the biological implementation of sex is the possibility of rape: operating the hardware doesn’t require mutual consent. I find the idea of rape disgusting, even if it’s virtual, so rape is disallowed in my implementation. In other words, it’s impossible for a paternal badge to force a sperm into a maternal badge: male roles are not allowed to have sex without first being asked by a female role. Instead, the person playing the female role must first initiate sex with a target mate. Conversely, female roles can’t steal sperm from male roles; sperm is only generated after explicit consent from the male. Assuming consent is given, a sperm is transmitted to the maternal badge and the protocol is complete. This two-way handshake assures mutual consent.
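
The consent rules amount to a simple two-way handshake, which might be sketched like this (the class and method names are invented for illustration; the real badge protocol is not reproduced here):

```python
# Sketch of the two-way consent handshake: only a maternal (egg donor)
# role may initiate, and sperm is only generated after explicit consent.

class Badge:
    def __init__(self, name, consents=True):
        self.name = name
        self.consents = consents
        self.pattern = None

    def request_sex(self, partner):
        """Maternal role: initiate by asking the partner for sperm."""
        sperm = partner.offer_sperm(self)
        if sperm is None:
            return "denied"      # no consent -> no gametes move
        # Child pattern replaces the pattern on the egg donor's badge.
        self.pattern = ("child-of", partner.name, self.name)
        return "bred"

    def offer_sperm(self, requester):
        """Paternal role: sperm exists only after explicit consent;
        it can never be pushed unasked or stolen."""
        return ("sperm-of", self.name) if self.consents else None

a = Badge("alice")
print(a.request_sex(Badge("carol")))               # "bred"
print(a.request_sex(Badge("bob", consents=False))) # "denied"
```

Both directions of coercion are structurally impossible: the paternal side cannot transmit without being asked, and the maternal side cannot take sperm that was never offered.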

This non-intuitive and partially role-reversed implementation of sex led to users asking support questions akin to “I’m trying to have sex, but why am I constantly being denied?” – and my response was: well, did you ask your potential mate if it was okay to have sex first? Ah! Consent. The very important but often overlooked step before sex. It’s a socially awkward question, but with some practice it really does become more natural and easy to ask.

Some users were enthusiastic early adopters of explicit consent, while others were less comfortable with the question. It was interesting to see the ways straight men would ask other straight men for sex – they would ask for “ahem, blinky sex” – and anecdotally women seemed more comfortable and natural asking to have sex (regardless of the gender of the target user).

As an additional social experiment, I introduced a “rare” trait (pegged at ~3% of a randomly generated population) consisting of a single bright white pixel that cycles around the LED ring. I wanted to see if campmates would take note and breed for the rare trait simply because it’s rare. At the end of the week, more people were expressing the rare phenotype than at the beginning, so presumably some selective breeding for the trait did happen.

In the end, I felt that having sex to breed interesting light patterns was a lot more fun for everyone than tweaking knobs and sliders in a UI. Also, because traits are inherited through sexual reproduction, by the end of the event one started to see families of badges gaining similar traits, but thanks to the randomness inherent in sex you could still tell individuals apart in the dark by their light patterns.

Finding Friends

Implementing sex requires a peer-to-peer radio. So why not also use the radio to help people locate nearby friends? It seems like a good idea on the surface, but the design of this system is a careful balance between creating a general awareness of friends in the area and creating a messaging client.

Personally, one of the big draws of going to Burning Man is the ability to unplug from the Internet and live in an environment of intimate immediacy – if you’re physically present, you get 100% of my attention; otherwise, all bets are off. Email, SMS, IRC, and other media for interaction (at least, I hear there are others, but I don’t use them…) are great for networking and facilitating business, but they detract from focusing on the here and now. For me there’s something ironic about seeing a couple in a fancy restaurant, both hopelessly lost staring deeply into their smartphones instead of each other’s eyes. Being able to set an auto-responder for two weeks which states that your email will never be read is pretty liberating, and allows me to open my mind up to trains of thought that can take days to complete. Thus, I really wanted to avoid turning the badge into a chat client, or any sort of communication medium that sets any expectation of reading messages and responding in a timely fashion.

On the other hand, meeting up with friends at Burning Man is terribly hard. It’s life before the cell phone – if you’re old enough to remember that. Without a cell phone, you have a choice between enjoying the music, stalking around the venue to find friends, or dancing in one spot all night long so you’re findable. Simply knowing if my friends have finally shown up is a big help; if they haven’t arrived yet, I can get lost in the music and check out the sound in various parts of the venue until they arrive.

Thus, I designed a very simple protocol which reveals only whether your friends are nearby, and nothing else. Every badge emits a broadcast ping every couple of seconds. Ideally, I’d use RSSI (receive signal strength indication) to estimate how far away each ping originated, but due to a quirk of the radio hardware I was unable to get a reliable RSSI reading. Instead, every badge listens for pings, counting them up as they arrive, and decrements the ping count at a slightly slower average rate than the ping broadcast rate. Thus, badges solidly within radio range run up a ping count, and as people get farther and farther away, the count decreases as pings are gradually lost in the noise.
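
The scheme can be sketched as a leaky counter per badge; the decay constant and threshold below are made-up illustrative numbers, not the badge’s actual tuning.

```python
# Toy model of the ping-count proximity heuristic: each received ping
# bumps a counter, and a periodic tick leaks counts away slightly faster
# than an in-range badge can replenish them. (Constants are hypothetical.)

class FriendTracker:
    DECAY = 0.9           # multiplicative leak applied every tick

    def __init__(self):
        self.counts = {}

    def on_ping(self, badge_id):
        self.counts[badge_id] = self.counts.get(badge_id, 0) + 1.0

    def tick(self):
        # pings from out-of-range badges get lost in the noise,
        # so their counts drift back toward zero
        for k in self.counts:
            self.counts[k] *= self.DECAY

    def nearby(self, threshold=3.0):
        return [k for k, v in self.counts.items() if v >= threshold]

t = FriendTracker()
for _ in range(10):       # badge solidly in range: pings outrun the decay
    t.on_ping("hap")
    t.tick()
assert "hap" in t.nearby()
for _ in range(30):       # badge walks away: decay wins
    t.tick()
assert "hap" not in t.nearby()
```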

Friend finding UI in action. In this case, three other badges are nearby, SpacyRedPhage, hap, and happybunnie:-). SpacyRedPhage is well within range of the radio, and the other two are farther away.

The system worked surprisingly well. The reliable range of the radio worked out to be about 200m in practice, which is about the sound field of a major venue at Burning Man. It was very handy for figuring out if my friends had left already for the night, or if they were still prepping at camp; and there was one memorable reunion at sunrise where a group of my camp mates drove our beloved art car, Dr. Brainlove, to Robot Heart and I was able to quickly find them thanks to my badge registering a massive amount of pings as they drove into range.

Hardware Details

I’m not so lucky that I get to design such a complex piece of hardware exclusively for a pursuit as whimsical as Burning Man. Rather, this badge is a proof-of-concept of a larger effort to develop a new open-source platform for networked embedded computers (please don’t call it IoT) backed by a rapid deployment supply chain. Our codename for the platform is Orchard.

The Burning Man badge was our first end-to-end test of Orchard’s “supply chain as a service” concept. The core reference platform is fairly well-documented here, and as you can see looks nothing like the final badge.

Bottom: orchard reference design; top: orchard variant as customized for Burning Man.

However, the only differences at a schematic level between the reference platform and the badge are the addition of 14 extra RGB LEDs, the removal of the BLE radio, and a redesign of the captouch electrode pattern. Because the BOM of the badge is a strict subset of the reference design, we were able to go from a couple of prototypes in advance of a private Crowd Supply campaign to 85 units delivered at the door of camp mates in about 2.5 months – and the latency of shipping units from China to front doors in the US accounts for one full month of that time.

The badge sports an interactive captouch surface, an OLED display, 900MHz ISM band peer-to-peer radio, microphone, accelerometer, and more!

If you’re curious, you can view documentation about the Orchard platform here, and discuss it at the Kosagi forum.


As an engineer, my “default” existence is confined on four sides by cost, schedule, quality, and specs, with a sprinkling of legal, tax, and regulatory constraints on top. It’s pretty easy to lose your creative spark when every day is spent threading the needle of profit and loss.

Even though the implementation of Burning Man’s principles of decommodification and gifting is far from perfect, it’s sufficient to enable me to loosen the shackles of my daily existence and play with technology as a medium for enhancing human interactions, and not simply as a means for profit. In other words, thanks to the values of the community, I’m empowered and supported to build stuff that wouldn’t make sense for corporate shareholders, but might improve the experiences of my closest friends. I think this ability to leave daily existence behind for a couple weeks is important for staying balanced and maintaining perspective, because at least for me maximizing profit is rarely the same as maximizing happiness. After all, a warm smile and a heartfelt hug is priceless.

by bunnie at September 28, 2015 10:16 AM

September 26, 2015


Diodes BC847BS - matched BJT pair : weekend die-shot

Diodes Incorporated BC847BS - a pair of NPN transistors with matched hFE. Internally it has 2 separate dies.
Die size 306x306 µm.

Second die:

Lithography repeatability is definitely better than this. Parameter matching is likely achieved by using adjacent dies from the wafer. Two dies are used because one cannot place 2 BC847 transistors on the same die without significant changes to the technology (and it would no longer be a BC847) - the die bulk is one of the transistor terminals.

Difference between the dies. The top metal is quite non-uniform optically (as usual) over the area, but this is unlikely to have any impact on electrical characteristics. It would be interesting to make a similar difference photo for non-matched transistors.

September 26, 2015 01:22 PM

September 25, 2015

Free Electrons

Free Electrons at the Linux Kernel Summit 2015

Kernel Summit 2012 in San Diego

The Linux Kernel Summit is, as Wikipedia says, an annual gathering of the top Linux kernel developers, and is an invitation-only event.

In 2012 and 2013, several Free Electrons engineers were invited to and participated in a sub-event of the Linux Kernel Summit, the “ARM mini-kernel summit”, which focused more specifically on ARM-related developments in the kernel. Gregory Clement and Thomas Petazzoni went to the 2012 event in San Diego (United States), and in 2013, Maxime Ripard, Gregory Clement, Alexandre Belloni and Thomas Petazzoni participated in the ARM mini-kernel summit in Edinburgh (UK).

This year, Thomas Petazzoni has been invited to the Linux Kernel Summit, which will take place in late October in Seoul (South Korea). We’re happy to see that our continuous contributions to the Linux kernel are recognized and allow us to participate in such an invitation-only event. For us, participating in the Linux Kernel Summit is an excellent way of keeping up to date with the latest Linux kernel developments, and also, where needed, of giving feedback from our experience working in the embedded industry with several SoC, board and system vendors.

by Thomas Petazzoni at September 25, 2015 11:26 AM

September 24, 2015


TL431 - adjustable shunt regulator : weekend die-shot

The TL431 is another adjustable shunt regulator, often used in linear supplies with an external power transistor.
Die size 592x549 µm.

September 24, 2015 01:34 PM

September 18, 2015


NC393 progress update: all hardware is operational

10393 with 4 image sensors

Finally all the parts of the NC393 prototype are tested, and we now can make the circuit diagram, parts list and PCB layout of this board public. About half of the board components were tested immediately when the prototype was built, almost two years ago. Those tests did not require any FPGA code, just the initial software that was mostly already available from the distributions for the other boards based on the same Xilinx Zynq SoC. The only missing parts were the GPL-licensed initial bootloader and a few device drivers.

Implementation of the 16-channel DDR3 memory controller

Getting to the next part – testing of the FPGA-controlled DDR3 memory – took us longer: the overall concept and the physical layer were implemented in June 2014; the timing calibration software and application modules for image recording and retrieval were implemented in the spring of 2015.

Initial image acquisition and compression

When the memory was proven operational, what remained untested on the board were the sensor connections and the high-speed serial links for SATA. I decided not to make any temporary modules just to check the sensor physical connections, but instead to port the complete image acquisition, processing and compression functionality of the existing NC353 camera (just at a higher clock rate and with multiple channels instead of a single one), and then test the physical operation together with all the code.

Sensor acquisition channels: From the sensor interface to the video memory buffer

The image acquisition code was ported (or re-written) in June, 2015. This code includes:

  • Sensor physical interface – currently for the existing 10338 12-bit parallel sensor front ends, with provisions for up to 8-lane + clock high-speed serial sensors to be added. It is also planned to bond together multiple sensor channels to interface a single large/high-speed sensor
  • Data and clock synchronization, with flexible phase adjustment to recover image data and frame format for different camera configurations, including sensor multiplexers such as the 10359 board
  • Correction of lens vignetting and fine-step scaling of the pixel values, individual for each multiplexed sensor and color channel
  • Programmable gamma-conversion of the image data
  • Writing image data to the DDR3 image buffer memory using one or several frame buffers per channel; both 8bpp and 16bpp (raw image data, bypassing gamma-conversion) formats are supported
  • Calculation of histograms, individual for each color component and multiplexed sensor
  • Histogram multiplexer and AXI interface to automatically transfer histogram data to the system memory
  • I²C sequencer that controls the image sensors over the I²C interface, applying software-provided register changes when the designated frame starts; commands can be scheduled up to 14 frames in advance
  • Command frame sequencer (one per sensor channel) that schedules and applies system register writes (such as to control the compressors) synchronously to the sensor frames; commands can be scheduled up to 14 frames in advance
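
As an illustration of the frame-synchronized command scheduling described in the last two bullets, here is a toy model in Python; the real modules are Verilog, and the names and queueing details here are simplified assumptions.

```python
# Toy model: software posts register writes tagged with a future frame,
# and the sequencer applies them when that frame starts. (The real FPGA
# modules work differently in detail; this only shows the scheduling idea.)

from collections import defaultdict

class FrameSequencer:
    MAX_AHEAD = 14   # commands can be scheduled up to 14 frames in advance

    def __init__(self):
        self.frame = 0
        self.pending = defaultdict(list)

    def schedule(self, frames_ahead, reg, value):
        if not 0 < frames_ahead <= self.MAX_AHEAD:
            raise ValueError("can only schedule 1..14 frames ahead")
        self.pending[self.frame + frames_ahead].append((reg, value))

    def frame_start(self, apply):
        """Called at each sensor frame start: apply queued writes."""
        self.frame += 1
        for reg, value in self.pending.pop(self.frame, []):
            apply(reg, value)

seq = FrameSequencer()
log = []
seq.schedule(2, "exposure", 0x100)
seq.frame_start(lambda r, v: log.append((r, v)))   # frame 1: nothing yet
assert log == []
seq.frame_start(lambda r, v: log.append((r, v)))   # frame 2: write applied
assert log == [("exposure", 0x100)]
```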

JPEG/JP4 compression functionality

Image compressors get their input data from the external video buffer memory, organized as 16×16 pixel macroblocks; in the case of color JPEG images, larger overlapping tiles of 18×18 (or 20×20) pixels are needed to interpolate the “missing” colors from the input Bayer mosaic. As all the data goes through the buffer, there is no strict requirement to have the same number of compressor and image acquisition modules, but the initial implementation uses a 1:1 ratio and there are 4 identical compressor modules instantiated in the design. The compressor output data is multiplexed between the channels and then transferred to the system memory using 1 or 2 of the Xilinx Zynq AXI HP interfaces.

This portion of the code is also based on the earlier design used in the existing NC353 camera (some modules reuse code from as early as 2002); the new part of the code deals with flexible memory access, as the older camera firmware used a hard-wired 20×20 pixel tile format. The current code contains four identical compressor channels providing JPEG/JP4 compression of the data stored in the dedicated DDR3 video buffer memory, then transferring the result to circular buffers in system memory over one or two of the four Xilinx Zynq AXI HP channels. Other camera applications that use sensor data for realtime processing, rather than transferring all the image data to the host, may reduce the number of compressors. It is also possible to use multiple compressors to work on a single high-resolution/high-frame-rate sensor data stream.

Single compressor channel contains:

  • Macroblock buffer interface requests 32×18 or 32×16 pixel tiles from the memory and provides 18×18 overlapping macroblocks for JPEG or 16×16 non-overlapping macroblocks for JP4, using a 4KB memory buffer. This buffer eliminates the need to re-read horizontally overlapping pixels when processing consecutive macroblocks
  • Pixel buffer interface retrieves data from the memory buffer, providing a sequential pixel stream of 18×18 (or 16×16) pixels for each macroblock
  • Color conversion module selects one of the sub-modules: csconvert18a, csconvert_mono, csconvert_jp4 or csconvertjp4_diff to convert possibly overlapping Bayer mosaic tiles to a sequence of 8×8 blocks for the 2-d DCT transform
  • Average value extractor calculates the average value in each 8×8 block, subtracts it before the DCT and restores it after – this reduces the data width in the DCT processing module
  • xdct393 performs a 2-d DCT for each 8×8 pixel block
  • Quantizer re-orders each block’s DCT components from scan-line to zigzag sequence and quantizes them using software-calculated and loaded tables. This is the only lossy stage of the JPEG algorithm; when the compression quality is set to 100%, all the coefficients are set to 1 and the conversion is lossless
  • Focus sharpness module accumulates the amount of high-frequency components to estimate image sharpness over a specified window to facilitate (auto)focusing. It can also replace, on the fly, the average block value of the image with the amount of high-frequency components in the same block, providing a visual indication of focus sharpness
  • RLL encoder converts the continuous 64-samples-per-block data stream into RLL-encoded data bursts
  • Huffman encoder uses software-generated tables to provide additional lossless compression of the RLL-encoded data. This module (together with the next one) runs at double the pixel clock rate and has an input FIFO between the clock domains
  • Bit stuffer consolidates variable-length codes coming out of the Huffman encoder into fixed-width words, escaping each 0xff byte (these bytes have special meaning in a JPEG stream) by inserting 0x00 right after it. It additionally provides the image timestamp and length in bytes after the end of the compressed data, before padding the data to a multiple of 32-byte chunks; this metadata has a fixed offset before the 32-byte-aligned data end
  • Compressor output FIFO converts the 16-bit wide data from the bit stuffer module, received at double the compressor clock rate (currently 200MHz), and provides a 64-bit wide output at the maximal clock rate (150MHz) for the AXI HP port of the Xilinx Zynq; it also provides buffering when several compressor channels share the same AXI HP channel
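
The escaping rule the bit stuffer applies is standard JPEG byte stuffing; a minimal Python illustration of the rule (timestamp and padding handling omitted):

```python
# JPEG byte stuffing: every 0xFF byte in the entropy-coded stream is
# followed by a 0x00 so decoders don't mistake it for a marker prefix.

def stuff_bytes(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        out.append(b)
        if b == 0xFF:
            out.append(0x00)   # escape: 0xFF 0x00 means "literal 0xFF"
    return bytes(out)

assert stuff_bytes(b"\x12\xff\x34") == b"\x12\xff\x00\x34"
assert stuff_bytes(b"\xff\xff") == b"\xff\x00\xff\x00"
```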

Another module – the 4:1 compressor multiplexer – is shared between multiple compressor channels. It is possible (defined by Verilog parameters) to use either a single multiplexer with one AXI HP port (SAXIHP1) and 4 compressor inputs (4:1), or two of these modules interfacing two AXI HP channels (SAXIHP1 and SAXIHP2), reducing the number of concurrent inputs of each multiplexer to just 2 (2 × 2:1). The multiplexers use a fair arbitration policy and consolidate AXI bursts to the full 16×64 bits when possible. Status registers provide image data pointers for the last write and last frame start, each both as sent to AXI and after confirmation over the AXI write response channel.

Porting remaining FPGA functionality to the new camera

Additional modules were ported to complete the existing NC353 functionality:

  • Camera real-time clock that provides the current time with 1 microsecond resolution to various modules. It has accumulator-based correction circuitry to compensate for crystal oscillator frequency variations
  • Inter-camera synchronization module that generates and/or receives synchronization signals between multiple camera modules or other devices. When used between cameras, each synchronization pulse has timestamp information attached in serialized form, so the metadata of the simultaneous images from all synchronized cameras contains the same time code, generated by the “master” camera
  • Event logger that records data from multiple sources, such as GPS, IMU, image acquisition events and an external signal channel (like a vehicle wheel rotation sensor)

Simulating the full codebase

All that code was written (either new or modified from the existing NC353 FPGA project) by the end of July 2015, and then the most fun began. First I used the proven NC353 code to simulate (using Icarus Verilog + GTKWave) with the same input data as provided to the new x393 code, following the signal chains and making sure the data matched at each checkpoint. That was especially useful when debugging the JPEG compressor, as the intermediate data is difficult to follow. When I was developing the first JPEG compressor in 2002, I had to save output data from the various processing stages and compare it to the output of the corresponding stages of a software compressor run on the same image data. Having a working implementation helped a lot, and in 3 weeks I was able to match the output of all the processing stages described above, except the event logger, which I have not verified yet.

Testing the hardware

Then it was time to translate the Verilog test fixture code into Python programs running on the target hardware, extending the code developed earlier for the memory controller. The code is able to parse Verilog parameter definition files, which simplified keeping the Verilog and Python code in sync. It would be nice to use something like Cocotb in the future and get rid of the manual Verilog-to-Python translation completely.

As I am designing code for a reconfigurable FPGA (not for an ASIC), my usual strategy is not to aim for high simulation coverage, but to simulate to a “barely working” stage, then use the actual hardware (which runs tens of millions of times faster than the simulator), detect the problems, and then try to reproduce the same condition in simulation. But when I first started to run the hardware, I realized that there was too little I could learn about its current state. Remembering the mess of temporary debug code I had in previous projects, and the inability of the synthesis tool to directly access the qualified names of signals inside sub-modules, I implemented a rather simple debug infrastructure that uses a single register ring (like a simplified JTAG) threaded through all the modules to be debugged, plus matching Python code that allows access to individual bit fields of the ring. The design includes a single debug_master, and debug_slave modules in each of the design module instances that need debugging (and in the modules above them, up to the top one). By the time the camera was able to generate correct images, the total debug ring consisted of almost a hundred 32-bit registers; when I later disabled this debug functionality by commenting out a single `define DEBUG_RING macro, it recovered almost 5% of the device slices. The program output looks like:
x393 +0.001s--> print_debug 0x38 0x3e
038.00: compressors393_i.jp_channel0_i.debug_fifo_in [32] = 0x6e280 (451200)
039.00: compressors393_i.jp_channel0_i.debug_fifo_out [28] = 0x1b8a0 (112800)
039.1c: compressors393_i.jp_channel0_i.dbg_block_mem_ra [ 3] = 0x3 (3)
039.1f: compressors393_i.jp_channel0_i.dbg_comp_lastinmbo [ 1] = 0x1 (1)
03a.00: compressors393_i.jp_channel0_i.pages_requested [16] = 0x26c2 (9922)
03a.10: compressors393_i.jp_channel0_i.pages_got [16] = 0x26c2 (9922)
03b.00: compressors393_i.jp_channel0_i.pre_start_cntr [16] = 0x4c92 (19602)
03b.10: compressors393_i.jp_channel0_i.pre_end_cntr [16] = 0x4c92 (19602)
03c.00: compressors393_i.jp_channel0_i.page_requests [16] = 0x4c92 (19602)
03c.10: compressors393_i.jp_channel0_i.pages_needed [16] = 0x26c2 (9922)
03d.00: compressors393_i.jp_channel0_i.dbg_stb_cntr [16] = 0xcb6c (52076)
03d.10: compressors393_i.jp_channel0_i.dbg_zds_cntr [16] = 0xcb6c (52076)
03e.00: compressors393_i.jp_channel0_i.dbg_block_mem_wa [ 3] = 0x4 (4)
03e.03: compressors393_i.jp_channel0_i.dbg_block_mem_wa_save [ 3] = 0x0 (0)
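
On the Python side, slicing individual bit fields out of the ring registers might look like the following sketch; the addresses mirror the sample output above, but this is illustrative only, not Elphel’s actual parsing code.

```python
# Hypothetical field extraction from debug ring registers: each field is
# identified by (register address, bit offset, width), matching the
# addr.offset / [width] notation in the sample output.

def extract_field(ring_words, addr, bit_offset, width):
    """ring_words: mapping of ring address -> 32-bit register value."""
    val = ring_words[addr]
    return (val >> bit_offset) & ((1 << width) - 1)

# Register 0x39 packed as in the sample output: debug_fifo_out at .00 [28],
# dbg_block_mem_ra at .1c [3], dbg_comp_lastinmbo at .1f [1].
ring = {0x39: (0x1 << 0x1f) | (0x3 << 0x1c) | 0x1b8a0}

assert extract_field(ring, 0x39, 0x00, 28) == 0x1b8a0   # debug_fifo_out
assert extract_field(ring, 0x39, 0x1c, 3) == 0x3        # dbg_block_mem_ra
assert extract_field(ring, 0x39, 0x1f, 1) == 0x1        # dbg_comp_lastinmbo
```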

Acquiring the first images

All the problems I encountered while trying to make the hardware work turned out to be reproducible (though not always easily) in simulation, and over the next 3 weeks I eliminated them one by one. By the 51st version of the FPGA bitstream file (there were several more when I forgot to increment the version number), the camera started to produce consistently valid JPEG files.

First 4-sensor image acquired with NC393 camera

At that point I replaced the single sensor front end with no lens attached (just half of the sensor input window was covered with tape to produce a blurry shadow in the images) with four complete SFEs with lenses, using a piece of Eyesis4π hardware to point the individual sensors at 45° angles (in portrait mode), covering a combined 180°×60° FOV – resulting in the images shown above. The sensor color gains are not calibrated (so there is visible color mismatch) and the images are not stitched together (just placed side by side), but I consider it a significant milestone in the NC393 camera development.

SATA controller status

Almost at the same time, Alexey, who is working on the SATA controller for the camera, achieved an important milestone too. His code running on the Xilinx Zynq was able to negotiate and establish a link with an mSATA SSD connected to the NC393 prototype. There is still a fair amount of design work ahead until we are able to use this controller with the camera, but at least the hardware operation of this part of the design is now verified too.

What is next

Having all the hardware on the 10393 verified, we are now able to implement minor improvements and corrections to the 3 existing boards of the NC393 camera:

  • 10393 itself
  • 10389 – extension board with mSATA SSD, eSATA/USB combo connector, micro-USB and synchronization I/O
  • 10385 – power supply board

And then make the first batch of the new cameras that will be available for other developers and customers.
We also plan to make a new sensor board with the On Semiconductor (formerly Aptina, formerly Micron) MT9F002 – a 14MPix sensor with the same 1/2.3″ image format as the MT9P006 used in the current NC353 cameras. This 12-bit sensor will allow us to try a multi-lane high-speed serial interface while keeping the same physical dimensions of the sensor board, and to use the same lenses as we use now.

by andrey at September 18, 2015 05:38 PM

September 15, 2015

Free Electrons

2015 Q2 newsletter

This article was published in our quarterly newsletter.

Free Electrons working on the $9 computer!

NextThing Co, a company based in Oakland, California, made the news in recent months by starting a successful crowdfunding campaign to develop a $9 computer! Much like the Raspberry Pi, this $9 computer, called C.H.I.P, is based on an ARM processor and runs a Linux operating system.

More specifically, at the core of this computer is an Allwinner ARM processor, and Free Electrons engineer Maxime Ripard turns out to be the official Linux kernel maintainer for the support of this processor family. Since NextThing Co. is firmly engaged in having software support for the C.H.I.P that is as open-source as possible, they decided to contract us to do a lot of work in the official Linux kernel to improve the support for the Allwinner processor they are using.

Thanks to this project, some of the big missing features in the support of Allwinner processors in the official Linux kernel will be implemented in the coming months, so you can expect to see a lot of contributions from Free Electrons on such topics in the future. We’re really excited to be part of the $9 computer adventure!

See our blog post for more details.

Kernel contributions

As usual, we continue to contribute significantly to the Linux kernel, with 100 to 200 or more patches from Free Electrons engineers merged at each kernel release. Our focus continues to be on support for various ARM processor families.

  • In Linux 3.19, we had 205 patches merged, making Free Electrons the 13th contributing company in number of patches. See details on our 3.19 contributions.
  • In Linux 4.0, we had 252 patches merged, making Free Electrons the 6th contributing company in number of patches. See details on our 4.0 contributions.
  • In Linux 4.1, we had 118 patches merged, a smaller amount of contributions. See details.

Some major highlights of our contributions:

  • In Linux 4.0, we merged a complete driver for the display controller of the latest Atmel ARM processors. This DRM/KMS driver, written by Boris Brezillon, allows using the display of those processors with the mainline kernel. It was the last big feature missing in the mainline kernel for the Atmel processors.
  • Our engineer Alexandre Belloni was appointed as the co-maintainer of the RTC subsystem, and also as the co-maintainer of the support for the Atmel processors. As the maintainer of the RTC subsystem, Alexandre is now sending pull requests directly to Linus Torvalds!
  • In Linux 4.1, we completed the conversion of Atmel platform support to the multiplatform paradigm. And we also added support for the latest Armada 39x processor from Marvell.

New training session on Buildroot

Last year, we developed and released a new 3-day training session on the Yocto Project and OpenEmbedded. This year, we are happy to release a new 3-day training course covering the Buildroot embedded Linux build system.

Buildroot is a very popular alternative to the Yocto Project for building embedded Linux systems, thanks to its ease of use and simplicity. Free Electrons CTO Thomas Petazzoni is one of the top contributors to the project.

Over the 3 days of this training course, you will learn how to use Buildroot, how to add more packages, how to customize the filesystem generated by Buildroot, how Buildroot works internally and much more!

Check out our agenda, slides, and practical lab instructions for more details.

This training session, taught by Thomas, can be delivered anywhere in the world at your location, or individual participants can attend our first public session on this topic in Toulouse (France) in November 2015.

Recent projects

Besides our visible contributions, we also work on a number of projects for customer-specific platforms.

For a French customer making a custom i.MX6 base board using a System-on-Module from SECO, we ported a recent mainline U-Boot and a 3.10 Freescale kernel, and provided a Buildroot-based system with Qt5 and OpenGL acceleration to allow the customer to develop its own applications. Among other things, we had to add support for communication with an FPGA over SPI, and wrote a userspace tool to reprogram this FPGA over SPI.

This project led to a few U-Boot contributions (support for the SECO module):

And a few Buildroot contributions as well:

For a US-based customer, we developed a prototype system running on a Nitrogen 6x platform, built with Buildroot, and running the SuperCollider application for audio synthesis.

For a French customer, we developed a Yocto Project based BSP for a custom i.MX6 platform. The work involved kernel development to adapt to the hardware and to run a Qt5 application under X11.


Like every year, we participated in the Embedded Linux Conference in San Jose, California: seven engineers from Free Electrons attended the conference.

The videos and slides of the three talks we gave have been posted:

  • The DMAengine subsystem, by Maxime Ripard (slides, video).
  • The Device Tree as a stable ABI: a fairy tale?, by Thomas Petazzoni (slides, video).
  • MLC/TLC NAND support: (new ?) challenges for the MTD/NAND subsystem, by Boris Brezillon (slides, video)

For more details about our participation in ELC, see our blog post.

We have submitted several talks for the upcoming Embedded Linux Conference Europe, which will take place in early October in Dublin, Ireland.

Upcoming public training sessions

In addition to offering our training courses on-site anywhere in the world (we recently delivered training in the United States, Israel, India and Mexico!), we also offer public training sessions open to individuals. Our next public training sessions are:

Embedded Linux training
October 12-16, in Avignon (France), in English
November 23-27, in Toulouse (France), in French
Embedded Linux kernel and driver development training
July 20-24, in Avignon (France), in English
November 16-20, in Toulouse (France), in French
Embedded Linux development with Buildroot training
November 30-December 2, in Toulouse (France), in English
Yocto Project and OpenEmbedded development training
October 13-15, in Toulouse (France), in English
Android system development training
December 7-10, in Toulouse (France), in English


At Free Electrons, we are starting to get more and more requests for very cool projects. As it can be very frustrating to turn down very interesting opportunities, we have decided to look for new engineers to join our technical team.

Therefore, if you are a junior engineer showing a real interest in embedded Linux and open-source projects, or an experienced engineer with existing visible contributions and embedded Linux knowledge, do not hesitate to contact us.

See more details about our job openings.

by Michael Opdenacker at September 15, 2015 08:18 AM

September 13, 2015

Bunnie Studios

Name that Ware, September 2015

The Ware for September 2015 is shown below.

This is a little something I was gifted at Burning Man this year. I wore it around my neck for a week and then brought it back to my lab in Singapore and tore it apart. Obviously, it suffered some kind of severe trauma. I’m particularly enamored with the way the silicon melted — instead of revealing crystalline facets at the former wirebond pads, a smooth, remodeled and rather amorphous surface is revealed with rivulets of silicon radiating from the craters. Now that’s hot!

by bunnie at September 13, 2015 09:05 AM

Winner, Name that Ware August 2015

Last month’s ware is a controller board for a cutting machine, made by Polar-Mohr. The specific part number printed on the board is Polar SK 020162, which I’m guessing corresponds with this machine. Henry Valta pretty much nailed it, by guessing it as a Baum SK66 cutting circuit board. I’m not quite sure what the relationship is between Baumfolder and Polar-Mohr corporation, but it seems to be close enough that they share controller boards. Congrats, email me for your prize!

I do have to give a shout-out to zebonaut for noting the use of “V” designators for discrete semiconductors and linking it to German/DIN-compliant origins. I’m pretty good at picking out PCBs made by Japanese manufacturers, and this little factoid will now help me identify PCBs of EU/German design origin.

by bunnie at September 13, 2015 09:04 AM

September 12, 2015

Free Electrons

The quest for Linux friendly embedded board makers

Beagle Bone Black board

We used to keep a list of Linux friendly embedded board makers. When this page was created in the mid 2000s, it was easy to maintain: although more and more products were built with Linux, it was still difficult to find good hardware platforms supported by Linux.

So, to help community members and system makers select hardware for their embedded Linux projects, we compiled a first selection of board makers meeting the following criteria:

  • Offering attractive and competitive products
  • At least one product supporting Free Software operating systems (such as Linux, eCos and NetBSD).
  • At least one product meeting the above requirements, with a public price (no registration required), and still available on the market.
  • Specifications and documentation directly available on the website (no registration required). Engineers like to study their options on their own, without having to share their contact details with salespeople who would then chase them forever, trying to sell them inappropriate products.
  • Website with an English version.

In the beginning, this was enough to reduce the list to 10-20 entries. However, as Linux continued to increase in popularity, and as hardware platform makers started to understand the value of transparent pricing and technical documentation, the criteria were no longer sufficient to keep the list manageable.

Therefore, we added another prerequisite: at least one product supported (at least partially) in the official version of the corresponding Free Software operating system kernel. This was a rather strong requirement at first, but only such products bring a guarantee of long-term community support, making it much easier to develop and maintain embedded systems. Compare this with hardware supporting only a very old and heavily patched Linux kernel, for example, whose software can only be maintained by its original developers. This requirement also reveals the ability of the hardware vendor to work with the community and share technical information with its users and developers.

Then, with the development of low-cost community boards, and chip manufacturers' efforts to support their hardware in the mainline Linux kernel, the list again became difficult to maintain.

The next prerequisite we could add is availability as open-source hardware, allowing customers to modify the hardware according to their needs. Of course, the hardware design files should be available without registration.

However, rather than keeping our own list, the best approach is to contribute to Wikipedia, which has a dedicated page on open-source computing hardware. At least all the boards we could find are listed there, after we added a few.

Don’t hesitate to post comments to this page to share information about hardware which could be worth adding to this Wikipedia page!

Anyway, the good news is that Linux and open-source friendly hardware is now much easier to find than it was ten years ago. Just prefer hardware that is supported in the mainline Linux kernel sources, or at least hardware from a maker whose earlier products are already supported. A git grep -i command in the sources will help.
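The git grep check can be sketched as follows. Since this is just an illustration, the snippet builds a throwaway repository standing in for a real kernel checkout (the file content is made up for the example); in practice you would run the final command from the top of a mainline Linux tree, searching for your board or SoC name.

```shell
# Illustration only: create a tiny stand-in for a kernel checkout.
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p arch/arm/boot/dts
echo '/* Marvell Armada 385 Access Point Development Board */' \
    > arch/arm/boot/dts/armada-385-db-ap.dts
git add -A
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add dts'

# The actual technique: a case-insensitive search over the tracked sources.
# In a real kernel tree, hits under arch/ or drivers/ indicate mainline support.
git grep -il "armada 385"
```

Here the search prints the matching file path; an empty result in a real kernel tree is a strong hint that the platform is not supported in mainline.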

by Michael Opdenacker at September 12, 2015 05:21 PM

September 06, 2015

Video Circuits

DIY video VCO

Here are some shots of early XR2206 based video VCO experiments. The important thing with video is getting sync pulses from your SPG into a format that your oscillator circuit wants to sync to: some are fine with narrow pulses, some want a nice clean saw wave, or need the pulse to hit a certain voltage threshold. This means that if you don't have the skills to modify whatever SPG or VCO you have chosen, you will need sync conditioning circuits in between to get the two talking nicely.

by Chris ( at September 06, 2015 09:43 AM

September 01, 2015

Free Electrons

Linux 4.1 released, Free Electrons 17th contributing company

Tux

Linus Torvalds recently released the 4.1 Linux kernel, for which LWN.net gave a good description of the major new features: 4.1 Merge window, part 1, 4.1 Merge window, part 2, The 4.1 merge window closes.

As usual, Free Electrons engineers contributed to the Linux kernel during this development cycle, though with a smaller number of patches than usual: 118. This makes Free Electrons the 17th contributing company for this kernel release, by number of patches.

Our major contributions this time around have been:

  • On support for Atmel platforms
    • Alexandre Belloni did a good number of improvements to Atmel SoC support: converting some remaining SoCs to the SoC detection infrastructure, cleaning up the timer driver to use a syscon/regmap, removing a lot of unused headers in arch/arm/mach-at91/, etc. The final and very important change is that the AT91 ARM platform is now part of the multiplatform mechanism: you can build a single ARMv5 or ARMv7 zImage that includes support for the corresponding Atmel platforms.
    • Boris Brezillon improved the Atmel DRM/KMS driver for the display controller by switching to atomic mode-setting. He also added Device Tree definitions for the Atmel display controller on Atmel SAMA5D3 and Atmel SAMA5D4.
  • On support for Marvell EBU platforms
    • Ezequiel Garcia enabled the Performance Monitor Unit on Armada 375 and Armada 38x, which makes it possible to use perf on those platforms.
    • Gregory Clement did a number of fixes and minor improvements to support for Marvell EBU platforms.
    • Maxime Ripard enabled the Performance Monitoring Unit on Armada 370/XP, enabling the use of perf on these platforms. He also improved support for the Armada 385 AP board by enabling NAND and USB3 support.
    • Thomas Petazzoni added initial support for the new Marvell Armada 39x platform (clock driver, pinctrl driver, Device Tree). He did some cleanup and fixes in many Device Tree of Marvell EBU platforms and added suspend/resume support in the PCI and pinctrl drivers for these platforms.
  • Other contributions
    • As we posted recently, Alexandre Belloni also became a co-maintainer of the RTC subsystem during this release cycle.
    • Alexandre Belloni added bq27510 support for the bq27x00_battery driver.
    • Maxime Ripard did some small contributions to the dmaengine subsystem, improved the of_touchscreen code and the edt-ft5x06 touchscreen driver, and did some cleanup in the Allwinner sun5i clocksource driver.

For the upcoming 4.2 version, we have 198 patches in linux-next, of which 191 have already been pulled by Linus as part of the 4.2 merge window.

Our complete list of contributions follows:

by Thomas Petazzoni at September 01, 2015 12:52 PM

August 31, 2015

Free Electrons

Linux 4.2 released, Free Electrons contributions inside

Adelie Penguin
Linus Torvalds released the 4.2 version of the Linux kernel last Sunday. LWN.net covered the merge window of this 4.2 release cycle in 3 parts (part 1, part 2 and part 3), giving a lot of details about the new features and important changes.

In a more recent article, LWN.net published some statistics about the 4.2 development cycle. In those statistics, Free Electrons appears as the 10th contributing company by number of patches, with 203 patches integrated, and Free Electrons engineer Maxime Ripard is in the list of the most active developers by changed lines, with more than 6000 lines changed.

This time around, the most important contributions of Free Electrons were:

  • Support for Atmel ARM processors:
    • The effort to clean up arch/arm/mach-at91/ continued, now that the conversion to the Device Tree and multiplatform is complete. This was mainly done by Alexandre Belloni.
    • Support for the ACME Systems Arietta G25 was added by Alexandre Belloni.
    • Support for the RTC on at91sam9rlek was also added by Alexandre Belloni.
    • Significant improvements were brought to the dmaengine xdmac and hdmac drivers (used on Atmel SAMA5D3 and SAMA5D4), bringing interleaved support, memset support, and better performance for certain use cases. This was done by Maxime Ripard.
  • Support for Marvell Berlin ARM processors:
    • In preparation for the addition of a driver for the ADC, an important refactoring of the reset, clock and pinctrl drivers was done, using a regmap and the syscon mechanism to more easily share the common registers used by those drivers. Work done by Antoine Ténart.
    • An IIO driver for the ADC was contributed, which relies on the syscon and regmap mentioned above, as the ADC uses registers that are mixed with the clock, reset and pinctrl ones.
    • The Device Tree files were relicensed under GPLv2 and X11 licenses.
  • Support for Marvell EBU ARM processors:
    • A completely new driver for the CESA cryptographic engine was contributed by Boris Brezillon. This driver aims at replacing the old mv_cesa drivers, by supporting the newer features of the cryptographic engine available in recent Marvell EBU SoCs (DMA, new ciphers, etc.). The driver is backward compatible with the older processors, so it will be a full replacement for mv_cesa.
    • A big cleanup/verification work was done on the pinctrl drivers for Armada 370, 375, 38x, 39x and XP, leading to a number of fixes to pin definitions. This was done by Thomas Petazzoni.
    • Various fixes were made (suspend/resume improvements, big endian usage, SPI, etc.).
  • Support for the Allwinner ARM processors:
    • Support for the AXP22x PMIC was added by Boris Brezillon, including the support for the regulators provided by this PMIC. This PMIC is used on a significant number of Allwinner designs.
    • A small number of Device Tree files were relicensed under GPLv2 and X11 licenses.
    • A big cleanup of the Device Tree files was done by using the “DT label based syntax” more aggressively.
    • A new driver, sunxi_sram, was added to support the SRAM memories available in some Allwinner processors.
  • RTC subsystem:
    • As announced recently, Free Electrons engineer Alexandre Belloni is now the co-maintainer of the RTC subsystem. He has set up a Git repository to maintain this subsystem. During the 4.2 release cycle, 46 patches were merged in the drivers/rtc/ directory: 7 were authored by Alexandre, and all other patches (with the exception of two) were merged by Alexandre and pushed to Linus.

The full details of our contributions:

by Thomas Petazzoni at August 31, 2015 08:53 PM

Video Circuits

How Video Post-Production Effects were done in the 80s

Continuing the theme of effects videos, here is a nice one about 80s era video effects.

by Chris ( at August 31, 2015 07:54 AM

August 19, 2015

Bunnie Studios

Name that Ware August 2015

The Ware for August 2015 is shown below.

I found this kicking around in the South China Material market this past June. It is indeed a production board (and still in use today!), so there is a definitive answer to this month’s challenge sitting somewhere in the cloud. The extensive use of CD4000 series CMOS chips in this board brings a little grin to my face — haven’t seen one of those in ages (except for the CD4066, which is still pretty handy even in contemporary situations).

Also, as a bonus, I found this in the same shop. This one isn’t for guessing, just for looking at. I’m a fan of FANUC.

As an administrative note, images from this site and the kosagi wiki, and a few other miscellaneous services, will be off-line for a bit on September 2nd. There’s maintenance work scheduled on the power grid at my flat, and so my servers will be brought off-line. If all goes well, it’ll be just 15 minutes. However, if the mains breaker to my unit doesn’t automatically reset, it could be up to a few hours before someone can get to it. I’ll be somewhere in Black Rock City, far from the Internet, while this all goes down…so if something really unfortunate happens, it could be a week before things get restored from backups.

by bunnie at August 19, 2015 10:31 AM

Winner, Name that Ware July 2015

The Ware for July 2015 was a bootlegged version of CAPCOM’s Carrier Air Wing. Congrats to pdw for nailing it, email me for your prize!

And a big thanks to Felipe Sanches for contributing last month’s ware and helping to judge the winner.

by bunnie at August 19, 2015 10:31 AM

August 16, 2015

Video Circuits

Video Screening in Tokyo

Alex organised a great screening in Tokyo check out the flyer

by Chris ( at August 16, 2015 07:16 AM

August 10, 2015


LM319M : weekend die-shot

LM319M - "high speed" (80ns) dual comparator.
Die size 2017x700 µm.

August 10, 2015 05:09 AM

August 03, 2015

Free Electrons

Free Electrons talks at the Embedded Linux Conference Europe

Father Mathew Bridge

The Embedded Linux Conference Europe 2015 will take place on October 5-7 in Dublin, Ireland. As usual, the entire Free Electrons engineering team will participate in the event, as we believe it is one of the best ways for our engineers to remain up-to-date with the latest embedded Linux developments and to connect with other embedded Linux and kernel developers.

The conference schedule has been announced recently, and a number of talks given by Free Electrons engineers have been accepted:

We submitted other talks that were rejected, probably because both of them had already been given at the Embedded Linux Conference in California: Maxime Ripard's talk on dmaengine and Boris Brezillon's talk on supporting MLC NAND. We regret the latter, since Boris is currently actively working on this topic, so we expect to have useful results by the time of ELCE, compared to his ELC talk which was mostly a presentation of the issues and some proposals to address them. Interested readers can anyway watch those talks and/or read the slides.

In addition to the Embedded Linux Conference Europe itself:

  • Thomas Petazzoni will participate in the Buildroot developers meeting on October 3/4, right before the conference.
  • Alexandre Belloni will participate in OEDEM, the 2015 OpenEmbedded Developers' European Meeting, taking place on October 9, after the conference.

by Thomas Petazzoni at August 03, 2015 12:08 PM

July 29, 2015


NC393 progress update and a second life of the NC353 FPGA code

Another update on the development of the NC393 camera: I finished adding the FPGA code that re-implements the functionality of the NC353 camera (with additional multi-sensor capability), including the JPEG/JP4 compressors, the IMU/GPS logger and inter-camera synchronization. The next step is simulation and debugging, which will involve co-simulating the same sensor image data with the code of the existing NC353 camera. This in turn requires updating that camera's code to a state compatible with the development tools we now use, so an additional sub-project was spawned.

Verilog code development with VDT plugin for Eclipse IDE

Before describing the renovation of the NC353 camera FPGA code, I need to tell about the software we have been using for the last year. Living in a world where FPGA chip manufacturers have a monopoly (or duopoly, as there are two major players) on rather poor software tools, I realize that this will not change in the short term. But it is possible to constrain those proprietary creations to designated “cages”, letting them do only the tasks that require secret knowledge of the chip internals, without letting them take control of the whole development process, or make you depend on them when they abandon one software environment and introduce another half-made one as soon as you have gotten used to the previous.

This is what VDT is about: it uses one of the most standard development environments, the Eclipse IDE, and combines it with a heavily modified version of VEditor and the Tool Specification Language (TSL), which allows developers to integrate additional tools without touching the plugin code itself. Integration involves writing tool descriptions in TSL (based on the tool manufacturer's manual that specifies command options and parameters) and possibly creating custom parsers for the tool output; these programs may be written in any programming language the developer is comfortable with.

Current integration includes Free Software simulation programs (such as Icarus Verilog with GtkWave). As it is safe to rely on Free Software, we may add code specific to these programs to the plugin body to achieve deeper integration, combining code and waveform navigation and breakpoint support.

For the FPGA synthesis and implementation tools, this software supports Xilinx ISE and Vivado, and we are now working on Altera Quartus too. There is no VDT code dependence on the specifics of each of these tools, and the tools are connected to the IDE using ssh and rsync, so they do not have to run on the same workstation.

Renovating the NC353 camera code

Initially I just planned to enter the NC353 camera FPGA code into the VDT environment for simulation. When I opened it in this IDE, it showed more than 200 warnings in the code. Most were just unused wires/registers and signal width mismatches that did not impact the functioning of the camera, but at least one was definitely a bug: one that gets control on very rare occasions and so is difficult to catch.

When I had fixed most of these warnings and made sure simulation worked, I decided to try to run the ISE 14.7 tools and generate a functional bitstream. There were multiple incompatibilities between ISE 10 (which was last used to generate a bitstream) and the current version; most modifications were needed to change the description of the I/O standard and other parameters of the device pins (from the constraint file and “// synthesis attribute …” comments in the code to the modern style of using parameters).

That turned out to be doable: first I made the design agree with all the tools down to the very last step (bitstream generation), then reconciled the generated pad report with the one generated by the old tools (some differences remain, but they are understandable and OK). Finally I had to figure out that I needed to turn on a non-default option to use timing constraints, and how to change the speed grade to match the one used with the old tools. That resulted in a bitstream file that I tested on just one camera, and I got images. It was a second attempt; the first one resulted in a “kernel panic” and I had to reflash the camera. The project repository has a detailed description of how to make such testing safe, but it is still better to try your modified FPGA code only if you know how to “unbrick” the camera.

We’ll do more testing of the bit files generated by the ISE 14.7, but for now we need to focus on the NC393 development and use NC393 code as a reference for simulation.

Back to NC393

Before writing simulation test code for the NC393 camera, I made the code pass all the Vivado tools and produce a bitfile. That required some code tweaking, but finally it worked. Of course there will be some code changes to fix bugs revealed during verification, but most likely the changes will not be radical. This assumption allows us to see the overall device utilization and confirm that the final design is going to fit.

Table 1. NC393 FPGA Resources Utilization
Type Used Available Utilization(%)
Slice 14222 19650 72.38
LUT as Logic 31448 78600 40.01
LUT as Memory 1969 26600 7.40
LUT Flip Flop Pairs 44868 78600 57.08
Block RAM Tile 78.5 265 29.62
DSPs 60 400 15.00
Bonded IOB 152 163 93.25
IDELAYCTRL 3 5 60.00
ILOGIC 72 163 44.17
OLOGIC 48 163 29.45
BUFGCTRL 16 32 50.00
BUFIO 1 20 5.00
MMCME2_ADV 5 5 100.00
PLLE2_ADV 5 5 100.00
BUFR 8 20 40.00
MAXI_GP 1 2 50.00
SAXI_GP 2 2 100.00
AXI_HP 3 4 75.00
AXI_ACP 0 1 0.00

One AXI general purpose master port (MAXI_GP) and one AXI “high performance” 64-bit slave port are reserved for the SATA controller, and the 64-bit cache-coherent port (AXI_ACP) will be used for CPU accelerators for the multi-sensor image processing.

The next development step will be simulation and debugging of the project code; luckily, a large part of the code can be verified by comparing it with the older NC353.

by andrey at July 29, 2015 07:59 AM

July 19, 2015

Bunnie Studios

Name that Ware, July 2015

The Ware for July 2015 is shown below:

Ahh…hardware from the 80’s/early 90’s. My favorite era, when circuit board traces were laid out freehand using pen or tape and 74-series logic gates were still a thing. Thanks to Felipe Sanches for providing the ware, and to xobs for taking the photos while he was in Brazil for his keynote at FISL16!

Sorry for the lack of updates on this blog, it’s been a busy summer. To get a whiff of what I’ve been up to, check out my article in Wired Magazine on trends enabling the decentralization of innovation in hardware and Jinjoo’s blog-in-progress on the manufacturing bootcamp I held this summer in Shenzhen for MIT Media Lab students, which also happened to be the inaugural application of our new Orchard IoT Platform.

by bunnie at July 19, 2015 03:12 PM

Winner Name that Ware June 2015

The Ware for June 2015 is, in fact, an HV supply for driving an X-ray tube, and during normal operation it’s immersed in oil. I’ll give the prize to Matt Sieker, for being the first to correctly guess the ware.

Interesting that so many people found it to be “obviously” an HV supply for an X-ray tube — first time I had ever seen one! I found the construction details of the high voltage transformers to be interesting. Certainly a domain in which I have little direct design expertise.

by bunnie at July 19, 2015 03:11 PM

July 11, 2015


Mikron 1663RU1 - first Russian 90nm chip : weekend die-shot

Mikron is currently the most advanced microelectronics fab in Russia, located in Zelenograd. In 2010 they licensed 90nm technology from STMicroelectronics, and the equipment setup was somewhat ready by the end of 2012. The technology transfer was hindered by very small manufacturing volume and scarce funding. Nevertheless, the 1663RU1 became their first 90nm product to reach commercial customers. It is a 16 Mibit SRAM chip.

There is no redundancy or ECC correction on this chip, and it uses bulk-Si ("civilian") technology with no radiation-hardening tricks implemented. This chip is apparently intended for industrial/military applications; use in space is only possible with great care.

After metalization etch. Each small square is a matrix of 64x128 bit, 16 Mibit total.

Finally, the SRAM cells themselves. Cell area is 1.2 µm², which is average for 90nm technology (the best are around 1 µm²). Scale is 1px = 57nm.

For comparison, 180nm SRAM from STMicroelectronics at the same scale (STM32F100C4T6B).

If we take a look at the piece where bits of the first metal are preserved, we can see that Mikron uses a litho-friendly SRAM design, where critical layers only use straight lines.

Here is Andrew Zonenberg's suggestion on 6T SRAM cell layout:

Die size 5973x6418 µm.

July 11, 2015 12:53 PM

July 10, 2015


GTX_GPL – Free Software Verilog module to simulate a proprietary FPGA primitive

Widespread high-speed protocols based on serial interfaces have become easier and easier to implement on FPGAs. If you take a look at Xilinx's chip series, you can trace the evolution of embedded transceivers from awkwardly inflexible models to much more capable ones. Nowadays even the affordable 7 series FPGAs possess GTX transceivers. Basically, they represent a unification of the physical layers of various protocols, where versatility is provided by parameters and control input signals.
The problem is that, for some reason, GTX's simulation model is a secured IP block. This means that without proprietary software it is impossible to compile and simulate the transceiver. Moreover, we use Icarus Verilog for these purposes, which does not provide deciphering capabilities for now, and does not seem likely to ever do so.

Still, our NC393 camera has to use the GTX as part of its SATA host controller design. That is why we decided to create a small simulation model which behaves like the GTX, at least within some limitations and assumptions. This was done so that we could create a full-fledged, non-synthesizable verification environment and provide our customers with a solution that is universal for simulation purposes.

The project itself can be found on github. The implementation is still crude and contains only the bare minimum required to achieve our goals. However, it is designed so that it can be extended to other protocols. That is why it preserves the original GTX structure, as presented in Xilinx's “7 Series FPGAs GTX/GTH Transceivers User Guide v1.11”, also known as UG476:
The overall design of the so-called GTX_GPL is split into 4 parts, contained in a wrapper to ensure interface compatibility with the original GTX. These parts are: TX (transmitter), RX (receiver), channel clocking and common clocking.
The whole clocking scheme is based on the assumption that clocks, PLLs and interconnects are ideal, so no setup/hold violations or metastability are expected. That by itself makes the design non-synthesizable, but greatly reduces its complexity.

RX - Receiver

Receiver + Clocking

TX - Transmitter

Transmitter + Clocking

Transmitter and receiver schemes are presented in the figures, each with its clocking mechanism. You can compare them to the GTX's corresponding schemes (see UG476, pages 107, 133, 149, 169). As you can see, TX and RX lack some of the original functional blocks. However, many of those are important only for synthesis or precise post-synthesis simulation, like phase adjustments or analog-level blocks. Others (like the gearbox) are unnecessary for SATA, and implementing them would be costly.
Despite all of that, the current implementation passes some basic tests when SATA parameters are turned on. The resulting waves were compared to those obtained by swapping GTX_GPL for the original GTXE2_CHANNEL primitive as the device under test, and they showed more or less the same behavior.

You can access the current version via github. It is not necessary to clone or download the whole repository; it is enough to get the ‘GTXE2_CHANNEL.v’ file from there. This file is a collection of all the necessary modules from the repository, with GTXE2_CHANNEL as the top module. After including it in your project (or linking it as a library/source file), the original unisims GTXE2_CHANNEL.v primitive will be overridden.
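The override mechanism can be sketched like this (a non-synthesizable illustration, not code from the repository: the wrapper module, instance name and extra file names are made up, and the parameter/port lists are elided since they simply mirror the Xilinx primitive as documented in UG476):

```verilog
// my_sata_phy.v (illustrative wrapper): instantiate the transceiver
// exactly as you would the Xilinx primitive. Compile the repository's
// GTXE2_CHANNEL.v together with your sources, e.g. with Icarus Verilog:
//   iverilog -o sim.vvp GTXE2_CHANNEL.v my_sata_phy.v my_testbench.v
// Since that file defines a module named GTXE2_CHANNEL, this instance
// binds to the GPL model instead of the encrypted Xilinx one.
module my_sata_phy (/* ... user ports ... */);
    GTXE2_CHANNEL #(
        // ... the same parameters you already pass to the Xilinx primitive ...
    ) gtx_i (
        // ... the same ports, per UG476 ...
    );
endmodule
```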

If you find bugs during simulation in a SATA context, or you want some features to be implemented (for any protocol's set-up), feel free to leave a message via the comments, PM or github.

Overall, the design should be useful for verification purposes. It allows creating a properly GPL-licensed simulation verification environment which is not hard-bound to proprietary software.

by Alexey at July 10, 2015 03:04 AM

July 01, 2015

Bunnie Studios

Name that Ware, June 2015

The Ware for June 2015 is shown below.

Thanks to Dan Scherer for contributing this ware! I don’t have a specific make/model number for it, just a general idea of what it’s for, so I’ll try my best to judge the submissions given partial information.

by bunnie at July 01, 2015 02:57 AM

Winner, Name that Ware May 2015

The Ware for May 2015 is a DVB antenna amplifier. The brand/model number is Draco-HDT2-7300. Lots of excellent submissions and in an act of total arbitrary judgment I’ll say pelrun is the winner for calling it as an amplified TV antenna first. Gratz, email me for your prize!

by bunnie at July 01, 2015 02:56 AM

June 29, 2015


BFG135 - NPN 7GHz RF BJT transistor : weekend die-shot

BFG135 - 7GHz RF NPN transistor with integrated emitter-ballasting resistors. The transistor layout is sparse in order to lower thermal (mainly) and collector resistance.
Die size 668x538 µm, transistor fin halfpitch - 800nm.

Closer look:

June 29, 2015 12:07 AM

June 28, 2015

Video Circuits

Rob Schafer and Donny Blank - Interview from 1983 - Historical look at Video Synthesis

"Rob Schafer and Donny Blank - Interview from 1983 on the video synthesizer.
Posted by Video 4 (Synopsis Video) - Denise Gallant"

by Chris ( at June 28, 2015 03:29 AM

June 24, 2015


nRF51822 - Bluetooth LE SoC : weekend die-shot

nRF51822 is a widely used Bluetooth LE SoC with a Cortex-M0 core and an on-chip buck DC-DC converter (the L and C are external).
Die size 3833x3503 µm, ~180nm technology.

June 24, 2015 10:02 AM

June 21, 2015

Video Circuits

Dan Bucciano

Dan Bucciano recently posted this fantastic clip to the discussion group, so I thought I would share. It's some lovely black and white feedback processed with a color solarizer prototype he designed around 20 years ago.

by Chris ( at June 21, 2015 12:22 AM

June 18, 2015

Free Electrons

Buildroot 2015.05 release, Free Electrons contributions inside

Buildroot Logo

The Buildroot project has recently released a new version, 2015.05. With exactly 1800 patches, it is the largest release cycle ever, with patches from more than 100 different contributors. This impressive number shows the growing popularity of Buildroot as an embedded Linux build system.

The CHANGES file summarizes the most important improvements of this release.

Amongst those 1800 patches, 143 patches were contributed by Free Electrons. Our most significant contributions for this release have been:

  • Addition of a package for the wf111 WiFi drivers, which support a WiFi chip from Bluegiga that is being used in one of our customer projects.
  • Addition of support for using uClibc-ng. uClibc-ng is a “collaborative” fork of the uClibc project, which aims at doing more regular releases and at better testing. Maintained by Waldemar Brodkorb, the project has already seen several releases since its initial 1.0 release. Waldemar regularly merges patches from the original uClibc and adds more fixes. This allows Buildroot and other uClibc users to have well-identified stable uClibc versions instead of a 3-year-old version with dozens of patches on top of it. uClibc-ng is not used as the default uClibc version as of 2015.05, but it might very well be in 2015.08.
  • Important internal changes to the core infrastructure. Until this release, the make legal-info, make source, make external-deps and make source-check logic relied only on the Buildroot configuration file. This gave correct results for target packages, which all have a corresponding Buildroot configuration option, but not for host packages (most of which do not have Buildroot configuration options); only a manual two-level dependency handling was done for host packages for the above-mentioned commands. With our work, the handling of those features has been moved into the package infrastructure itself, so it uses proper make recursion to resolve the entire dependency tree. Because of this, the output of make legal-info or make external-deps may be longer following this release, but that is because it is now actually correct and complete. You can look at the patches for more details, but these changes are very deep in the core Buildroot infrastructure.
  • Large number of build fixes. We contributed 52 patches fixing issues detected by the autobuild infrastructure.
  • Addition of the imx-usb-loader package, which can be used to load over USB a new bootloader on i.MX6 platforms, even if the platform has no bootloader or a broken bootloader. We also use it as part of one of our customer projects.
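The difference between the old fixed-depth dependency handling and the new recursive resolution can be illustrated with a small sketch. The package names are hypothetical, and Buildroot itself implements this in make, not Python; this only shows why a two-level walk misses deeper host dependencies:

```python
# Package -> direct dependencies (hypothetical names, not real Buildroot packages).
DEPS = {
    "target-app": ["host-tool-a"],
    "host-tool-a": ["host-tool-b"],
    "host-tool-b": ["host-tool-c"],  # third level: missed by a two-level walk
    "host-tool-c": [],
}

def two_level(pkg):
    """Old approach: the package, its direct deps, and their deps. No deeper."""
    found = {pkg}
    for d in DEPS[pkg]:
        found.add(d)
        found.update(DEPS[d])
    return found

def recursive(pkg, found=None):
    """New approach: walk the whole dependency tree."""
    if found is None:
        found = set()
    if pkg in found:
        return found
    found.add(pkg)
    for d in DEPS[pkg]:
        recursive(d, found)
    return found

# host-tool-c is only reached by the recursive walk
print(sorted(recursive("target-app") - two_level("target-app")))
```

This is why the output of make legal-info and friends can grow after the change: packages that were always dependencies are finally being visited.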

With 142 patches, Free Electrons engineer Thomas Petazzoni is the third largest contributor to this release by number of patches:

git shortlog -s -n 2015.02..

   397	Bernd Kuhls
   393	Gustavo Zacarias
   142	Thomas Petazzoni

But our most important contribution by far for this release is Thomas acting as the interim maintainer: of the 1800 patches merged for this release, Thomas committed 1446. He has therefore been very active in merging the patches contributed by the Buildroot community.

There are already some very interesting goals set for the Buildroot 2015.08 release, as you can see on the Buildroot release goals page.

Also, if you want to learn Buildroot in detail, do not hesitate to look at our Buildroot training course!

by Thomas Petazzoni at June 18, 2015 08:34 AM

June 17, 2015

Video Circuits

Synapse by Christian Greuel

"Christian Greuel, Fake Space Labs / CalArts (1992)

An abstract work of visual music, “Synapse” is a stylized interpretation of the inner senses as they are lifted from a state of despondency to find temporary asylum in a delirious moment of lucidity. This mindscape takes us on a ride through the turbulence of the psyche using vintage real-time 3D graphics and electronic synthesis technology.

The base graphics were created at Fake Space Labs during an Artist-in-Residency (1991-92) and repurposed for this work in 2003.

Video and Music: Christian Greuel
A/D Transfer: Aaron Ross (2003)
Thanks to: Mark Bolas and Eric Gullichsen

Graphics created at: Fake Space Labs (1992)
Video processed at: California Institute of the Arts (1992)
Music created at: California Institute of the Arts (1991)

Software: Sense8 WorldToolKit 1.0, AutoCAD (3D models), ColoRIX (2D textures)
Hardware: i386 PC (4MB RAM), DS1 DVI video card, CRT display, 3/4" video tape and camera
Video Processing Hardware: Hearne/EAB Videolab, Fairlight CVI
Audio: Roland SH-5 analog synthesizer, Ampex 456 4-track 1/4" analog tape"

by Chris at June 17, 2015 01:55 PM

June 14, 2015

Video Circuits

McConnell Macro Video Synthesis System

Here is something you don't see every day: an Amiga Video Toaster / Atari Falcon based video synthesizer with multiple other signal paths going on. Thanks to Matthew McConnell for the upload! Not your standard analogue setup, and much closer to systems from the mid 90s.

by Chris at June 14, 2015 10:44 AM

June 12, 2015

Free Electrons

Free Electrons engineer Alexandre Belloni co-maintainer of Linux Atmel processor support

After becoming the co-maintainer of the Linux RTC subsystem, Free Electrons engineer Alexandre Belloni also recently became a co-maintainer for the support of Atmel ARM processors in the Linux kernel.

Free Electrons has been working with Atmel since early 2014 to improve support for their processors in the mainline kernel. Since then, our work has mainly consisted of:

  • Modernizing the existing code for Atmel processors: completing the switch to the Device Tree and the common clock framework for all platforms, reworking whatever was needed to make Atmel processor support compatible with the ARM multiplatform kernel, and doing a lot of related driver and platform refactoring.
  • Implementing a complete DRM/KMS driver for the display subsystem of the most recent Atmel processors.
  • Upstreaming support for the Atmel SAMA5D4, the latest Cortex-A5 based SoC from Atmel.

Thanks to this long-term involvement from Alexandre Belloni and Boris Brezillon, Alexandre was appointed as a co-maintainer of Atmel support, replacing Andrew Victor, who hasn’t been active in kernel development for quite some time. He joins Nicolas Ferre and Jean-Christophe Plagniol-Villard in the team of maintainers for the Atmel platform.

Alexandre has sent his first pull request as an Atmel co-maintainer on May 22, sending 9 patches to the ARM SoC maintainers, planned for the 4.2 kernel release. His pull request was quickly merged by ARM SoC maintainer Arnd Bergmann.

Free Electrons is proud to have one of its engineers as a maintainer of a very popular embedded Linux platform, one that has had a strong commitment to upstream Linux kernel support for many years. Alexandre is the third Free Electrons engineer to become an ARM platform maintainer: Maxime Ripard is the maintainer of Allwinner ARM processor support, and Gregory Clement is the co-maintainer of Marvell EBU ARM processor support.

by Thomas Petazzoni at June 12, 2015 12:24 PM

June 11, 2015

Free Electrons

Embedded Linux Projects Using Yocto Project Cookbook

Embedded Linux Projects Using Yocto Project Cookbook Cover

We were kindly provided a copy of Embedded Linux Projects Using Yocto Project Cookbook, written by Alex González. It is available at Packt Publishing, either in an electronic format (DRM free) or printed.

Being a cookbook, it is a set of recipes that you can refer to in order to solve your immediate problems, rather than a book to read from cover to cover. While, as the title indicates, the main topic is embedded development with the Yocto Project, the book also includes generic embedded Linux tips, like debugging the kernel with ftrace or debugging a device tree from U-Boot.

The chapters cover the following topics:

  • The Build System: an introduction to Yocto Project.
  • The BSP Layer: how to build and customize the bootloader and the Linux kernel, plenty of tips on how to debug kernel related issues.
  • The Software layer: covers adding a package and its configuration, selecting the initialization manager and making a release while complying with the various licenses.
  • Application development: using the SDK, various IDEs (Eclipse, Qt creator), build systems (make, CMake, SCons).
  • Debugging, Tracing and Profiling: great examples and tips for the usage of gdb, strace, perf, systemtap, OProfile, LTTng and blktrace.

The structure of the book makes it easy to find the answers you are looking for, while also explaining the underlying concepts behind each solution. It is definitely of good value once you start using the Yocto Project.

Free Electrons is also offering a Yocto Project and OpenEmbedded training course (detailed agenda) to help you start with your projects. If you’re interested, join one of the upcoming public training sessions, or order a session at your location!

by Alexandre Belloni at June 11, 2015 10:07 AM

June 10, 2015


NC393 progress update: HDL code for sensor channels is ported or re-written

Quick update: a new chunk of code has been added to the NC393 camera FPGA project. It is the second of the three major parts of the system (after the memory controller, which is finished) needed to match the existing NC353 functionality. This code has just been written; it still has to be verified, first by simulation, then by synthesizing it and running it on the actual hardware. We plan to do that when the third part, the image compressors, is ported to the new system too. The added code deals with receiving data from the image sensors and pre-processing it before storing it in video memory. FPGA-based systems are very flexible, and many other configurations, such as support for multi-lane serial interface sensors or using several camera ports to connect a single large high-speed sensor, are possible and will be implemented later. The table below summarizes the parameters of the current code only.

Table 1. NC393 Sensor Connections and Pre-processing

  Number of sensor ports: 4
  Total number of multiplexed sensors: 16
  Total number of multiplexed sensors with the existing 10359 multiplexer board: 12
  Sensor interface type (implemented in HDL): parallel, 12 bits
  Sensor interface hardware compatibility: parallel LVCMOS / serial differential, 8 lanes + clock
  Sensor interface voltage levels: programmable, up to 3.3V
  Number of I²C sequencers: 4 (1 per port)
  I²C sequencer frames: 16
  I²C sequencer commands per frame: 64
  I²C sequencer command data width: 16/8 bits
  Image data width stored: 16/8 bits per pixel
  Gamma conversion regions per port: 4
  Histograms, rectangular ROIs (Regions of Interest) per port: 4
  Histograms, color channels: 4
  Histograms, bins per color: 256
  Histograms, width per bin: 18 or 32 bits
  Histograms, histograms stored per sensor: 16

Up to four sensor channel modules can be instantiated in the camera, one per sensor port. In most applications all ports will run at the same clock frequency, but each can use a different clock, so heterogeneous sensors can be attached if needed. The current modules support 12-bit parallel data (such as the Aptina MT9P006 we currently use); an 8-lane + clock serial differential interface will be added later.

The sensor modules include programmable delay elements on each input line to optimize data sampling, and a small FIFO to compensate for phase variations between the free-running system clocks and the sensor output clocks, which are influenced by the sensors and the optional multiplexer PLLs.

As in the NC353, the sensor modules contain dedicated I²C sequencers. These sequencers synchronize the I²C commands sent to the sensors with the sensor frame sync signals; they also relax the response time requirements on the software, since commands can be scheduled ahead of time to be executed at a specific frame number.
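The scheduling idea can be sketched as follows. The slot counts match Table 1 (16 frame slots of 64 commands per sequencer), but this Python model is only an illustration of the concept, not the HDL implementation:

```python
from collections import deque

class I2CSequencer:
    """Toy model of per-frame I2C command scheduling (illustration only;
    the real sequencer is HDL with 16 frame slots of 64 commands each)."""
    FRAMES = 16          # frame slots, addressed modulo 16
    PER_FRAME = 64       # command capacity per frame slot

    def __init__(self):
        self.slots = [deque() for _ in range(self.FRAMES)]

    def schedule(self, frame_no, cmd):
        """Software queues a command ahead of time for a future frame."""
        slot = self.slots[frame_no % self.FRAMES]
        if len(slot) >= self.PER_FRAME:
            raise OverflowError("frame slot full")
        slot.append(cmd)

    def on_frame_sync(self, frame_no):
        """Called at each sensor frame sync: drain the matching slot."""
        slot = self.slots[frame_no % self.FRAMES]
        issued = list(slot)
        slot.clear()
        return issued

seq = I2CSequencer()
seq.schedule(5, ("exposure", 0x1234))   # scheduled well before frame 5
seq.schedule(5, ("gain", 0x0010))
print(seq.on_frame_sync(5))  # both commands issued exactly at frame 5
```

Because the slots are addressed modulo 16, software only has to stay less than 16 frames ahead of the sensor, which is what relaxes its response time requirements.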

Each of the sensor channels is designed to be compatible with a sensor multiplexer, such as the 10359 used in the current Elphel multi-sensor cameras. These boards connect to three sensor boards and present themselves to the system as a single large sensor. Images are acquired simultaneously by all three imagers; one is immediately routed downstream and the other two are stored in the on-board memory. After the first image is transferred to the camera system board, data from the other two sensors is read from memory and transferred in the same format as received from the sensors, so the system board receives data as if from a sensor with three times more lines. What is different in the NC393 camera code compared to the NC353 is that the code is now aware of the multiplexers: it can apply a different conversion to each sub-image and calculate histograms (used for autoexposure and white balance) per sub-image. The current NC353 camera (and the multisensor cameras based on the same design) has the same settings for the whole composite image coming from the multiplexer, and only one histogram window of interest.
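As a rough illustration of what "three times more lines" means for per-sub-image processing, here is a hypothetical helper that splits such a composite frame back into per-sensor sub-images; it is not camera code, just the indexing idea:

```python
def split_composite(frame_lines, sensor_lines, n_sensors=3):
    """Split a composite frame (lines from n sensors stacked vertically,
    as delivered by a 10359-style multiplexer) back into sub-images.
    Hypothetical helper for illustration only."""
    assert len(frame_lines) == sensor_lines * n_sensors
    return [frame_lines[i * sensor_lines:(i + 1) * sensor_lines]
            for i in range(n_sensors)]

# 3 sensors of 4 lines each -> one 12-line composite frame
composite = [f"s{i}_line{j}" for i in range(3) for j in range(4)]
subs = split_composite(composite, sensor_lines=4)
print(len(subs), len(subs[0]))  # 3 4
```

Once the sub-images are separated this way, each one can get its own gamma conversion and its own histogram windows, which is exactly what the NC393 code adds over the NC353.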

The channel modules are parameterized and can be fine-tuned for particular applications to reduce resource usage. For example, the histogram counters can be either 18 bits wide (sufficient in most cases) or a full 32 bits, and histogram data may be buffered (required only for sensors with very small vertical blanking when using a full-frame histogram WOI) or not. Depending on these settings, either one or two block RAM hard macros are instantiated.

Histogram data generated on all four ports (from up to 16 sensors) is transferred to system memory, and each of the 16 channels stores data for the last 16 frames acquired. This multi-frame storage relaxes the timing requirements on the software that processes the histograms. The data is sent over the general purpose S_AXI_GP0 port; this medium-speed interface is quite adequate for this amount of data, while the high-speed 64-bit wide AXI_HP* ports are reserved for higher-bandwidth image transfers.

by andrey at June 10, 2015 02:54 AM

June 09, 2015

Free Electrons

Embedded Linux and kernel job openings for 2015

At Free Electrons, we are starting to get more and more requests for very cool projects. As it can be very frustrating to turn down very interesting opportunities (such as projects that allow us to contribute to the Linux kernel, Buildroot or Yocto Projects), we have decided to look for new engineers to join our technical team.

Job description in a nutshell

  • Technical aspects: mainline Linux kernel development, Linux BSP and embedded Linux system integration, technical training
  • Location: working in one of our offices in France (Toulouse or Orange)
  • Contract: full-time, permanent French contract

Mainline Linux kernel development

Believe it or not, we now have an increasing number of customers contracting us to support their hardware in the mainline Linux kernel. They are either System on Chip manufacturers or systems makers, who now understand the strong advantages brought by mainline Linux kernel support to their customers and to themselves.

You can see the results: Free Electrons is now consistently in the top 20 companies contributing to the Linux kernel. We are even number 6 for Linux 4.0!

Note that this job doesn’t only require technical skills. It also has a strong social dimension, having to go through multiple iterations with the community and with kernel subsystem maintainers to get your code accepted upstream.

Linux BSP and embedded Linux system integration

This activity involves developing and integrating everything needed to deploy Linux on the customer’s hardware: bootloader, kernel, build environment (such as Buildroot or the Yocto Project), upgrade system, optimizing performance (such as boot time) and fixing issues. Another form it takes is providing guidance and support to customers learning to do this work themselves.

As opposed to Linux kernel development projects, which are often long-term (though with step-by-step objectives that can be reached in days), these are usually shorter and more challenging projects. They keep us in touch with the real-life challenges that customer engineers face every day, and they require achieving substantial results in a relatively small number of days.

Such projects also constitute opportunities to contribute improvements to the mainline kernel and bootloader projects, as well as to the build system projects themselves (Buildroot, Yocto Project, OpenWRT…).

Training and sharing experience

Knowledge sharing is an important part of Free Electrons’ mission and activity. Hence, another important aspect of the job is teaching, maintaining and improving Free Electrons training courses.

You will also be strongly encouraged to share your technical experience by writing blog posts or kernel documentation, and by proposing talks at international conferences, especially the Embedded Linux Conference (USA, Europe).


  • Experience: we are open both to experienced engineers and to people fresh out of engineering school. Though prior experience with these technical topics will be an advantage, we are also interested in young engineers demonstrating great potential for learning, coding and knowledge sharing. People who have made visible contributions in these areas will have an advantage too.
  • Language skills: fluency in oral and written English is very important. French speaking skills are not required, but are an advantage.
  • Traveling: for training sessions and conference participation, you will need the ability to travel rather frequently, up to 8-10 times a year.
  • Ability to relocate to one of our offices in France, either in Toulouse or in Orange, to strengthen our engineering teams there.

Details about Toulouse and Orange

  • Toulouse is a dynamic city with lots of high-tech and embedded systems companies in particular. Our office in Colomiers can easily be reached by train from downtown Toulouse if you wish to settle there. You would be working with Boris Brezillon, Antoine Ténart, Maxime Ripard and our CTO Thomas Petazzoni.
  • Our main office is located in Orange, in the heart of the Provence region, close to Avignon, a smaller but dynamic city too. It enjoys a sunny climate and proximity to the Alps and the Mediterranean sea. Accommodation is very affordable and there are no traffic issues! You would be working with our founder Michael Opdenacker, and of course remotely with the rest of the engineering team. In particular, we are interested in foreign engineers who could help us develop our services in their home countries.

We prefer not to offer home based positions for the moment, which have their own complexity and cost, while we have plenty of space left in our current offices.

See a full description and details about how to contact us.

by Michael Opdenacker at June 09, 2015 07:50 PM

June 03, 2015


RGB flicker LED : weekend die-shot

Unlike the previous LED, this one is completely deterministic: the dies differ slightly only in RC oscillator frequency (~±10%). The regular structure at the lower left suggests some sort of microcode-driven design.

Die size 553x474 µm, 1.5µm technology.

Thanks to the ASIP department of Gomel State University for this interesting chip.

After metalization etch:

June 03, 2015 12:56 AM

Flicker LED : weekend die-shot

Some of you might have seen candle flicker LEDs: their brightness is modulated randomly to mimic a real candle. This is achieved by a digital die copackaged with a red LED die in a standard 5mm transparent case.

This design apparently uses the phase difference between two RC oscillators as its source of random data. There are multiple designs in the wild; some others are apparently based on an LFSR with a single oscillator.
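For the LFSR-based variants, here is a minimal sketch of how such a circuit can turn one oscillator into a deterministic but flicker-like brightness stream. The 16-bit width and tap mask are a common textbook choice, not values recovered from any of these dies:

```python
def lfsr_stream(seed=0xACE1, taps=0xB400, n=8):
    """Galois LFSR pseudo-random generator, the kind of circuit the
    single-oscillator flicker LEDs are believed to use. Yields one
    n-bit sample per call, usable as a PWM brightness value."""
    state = seed
    while True:
        out = 0
        for _ in range(n):          # shift out n bits per sample
            lsb = state & 1
            state >>= 1
            if lsb:
                state ^= taps       # feedback taps (common 16-bit choice)
            out = (out << 1) | lsb
        yield out

gen = lfsr_stream()
brightness = [next(gen) for _ in range(5)]
print(brightness)  # deterministic, but looks random enough for a candle
```

The sequence repeats eventually (after at most 2^16 - 1 shifts for a maximal-length 16-bit LFSR), which is fine for a candle effect nobody watches that long.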

Die size 580x476 µm, 3µm technology.

Thanks to the ASIP department of Gomel State University for this interesting chip.

After metalization etch:

June 03, 2015 12:19 AM

June 02, 2015

Free Electrons

New training course on Buildroot: materials freely available

Last year, Free Electrons launched a new training course on using the Yocto Project and OpenEmbedded to develop embedded Linux systems. Among the build system tools available in the embedded Linux ecosystem, another very popular choice is Buildroot, and we are happy to announce today that we are releasing a new three-day training course on Buildroot!

Free Electrons is a major contributor to the Buildroot upstream project, with more than 2800 patches merged as of May 2015. Our engineer Thomas Petazzoni alone has contributed more than 2700 patches. He has gathered extensive knowledge of Buildroot and its internals, being one of the primary authors of the core infrastructures of Buildroot. He is a major participant in the Buildroot community, organizing the regular Buildroot Developer Days and supporting users on the mailing list and on IRC. Last but not least, Thomas acts as interim maintainer when the main Buildroot maintainer is not available, an indication of Thomas’ strong involvement in the Buildroot project.

In addition, Free Electrons has used, and is still using, Buildroot in a significant number of customer projects, giving us an excellent view of how Buildroot is used in real projects. This feedback has driven some of our Buildroot contributions over the past few years.

The three-day course we have developed covers all aspects of Buildroot: basic usage and configuration, understanding the source and build trees, creating new packages including advanced aspects, analyzing the build, tips for organizing your Buildroot work, using Buildroot for application development and more. See the detailed agenda.

We can deliver this training course anywhere in the world, at your location (see our rates and related details). We have also scheduled a first public session in English in Toulouse, France, from November 30 to December 2. Contact us if you are interested.

Finally, as we do for all our training sessions, we are making the training materials freely available under a Creative Commons BY-SA license at the time of the training announcement: the first session of this course is being given this week. For the Buildroot training, the available materials are:

Our materials have already been reviewed by some of the most prominent contributors to Buildroot: Peter Korsgaard (Buildroot maintainer), Yann E. Morin, Thomas De Schampheleire, Gustavo Zacarias and Arnout Vandecappelle. We would like to take this opportunity to thank them for their useful comments and suggestions in the development of this new training course.

by Thomas Petazzoni at June 02, 2015 08:51 PM

May 30, 2015

Bunnie Studios

Name that Ware, May 2015

The Ware for May 2015 is below.

Thanks to xobs for contributing this ware!

by bunnie at May 30, 2015 02:18 PM

Winner, Name that Ware April 2015

The Ware for April 2015 is a control board from a Keyence VE-7800 SEM, which I bought with some friends for a steal at a used equipment shop. Unlike my previous SEM adventure, this one is in good working order.

Nobody guessed it correctly, but I liked Dave Z’s analysis, and also Paul Campbell’s comment about two engineers at war. It’s a nice mental image :) However, Christian Vogel picked up on the vacuum flange in the background, which was really subtle, so I’ll declare him the winner. Gratz, email me for your prize!

by bunnie at May 30, 2015 02:18 PM

May 24, 2015


Communicating XML concisely using Swift

Some background…

There are plenty of XML haters out there but this is a great quote:

XML is like violence: if it doesn’t solve your problem, you aren’t using enough of it.

Having spent quite a bit of time working with Lisp-like languages, I am used to feeling syntax-fuelled hate. However, if what I send across the wire is not XML but something very concise, I am not overly bothered about syntax. I will use XML, JSON, S-expressions, or whatever, as long as there are decent parsers.

There have been numerous attempts to invent a new markup language or abstract syntax to represent the same kinds of data. When it comes to concisely transferring this data over a network, you can adopt one of two approaches. You can send everything in one message, or you can separate the data you need to communicate from the structure and types, while still allowing the receiver to reconstruct it. In other words, you could take a chunk of markup and apply some text compression to it before sending the binary over your network, or you could send a minimal amount of data encoded into binary that relies on a schema or protocol to reconstruct it.

The latter has the advantage that the schema can be used multiple times on different sets of data, as long as they conform to it. This means you not only send less data, but can also take advantage of fast encode/decode cycles by processing the schema once at startup and using that optimised version on each encode and decode. It also allows validation to take place on every encode and decode, something text compression has no clue about.

ASN.1 led the way in this approach. It was very powerful but complex, which made tool support challenging. Google’s Protocol Buffers follows a similar approach but has a less abstract syntax, providing an easier way to map the data and types to programming languages. In the XML world there have been many attempts at compressing this verbose markup, and in the end EXI emerged as the standard approach. However, I think its design is also flawed: in my opinion EXI suffers from the same problem as ASN.1, being overly complex to implement. That means you will struggle to find credible implementations outside of enterprise computing. I also don’t think it makes sense to try and invent a general serialisation approach for XML, because there are too many caveats; at some point you will give in and employ text compression instead.

So when I decided to make an XML serialisation tool, I wanted to recognise these limitations. Packedobjects is based on ASN.1 but represented in a subset of XML Schema. It has a limited set of data types that are enough to write network protocols. It deliberately restricts XML Schema to control things like the order of data and the way data repeats. For example, I don’t think it makes much sense to support a set data type when machines are pretty good at generating things in the same order each time.
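The difference between the two approaches can be made concrete with a tiny sketch. The "schema" here is just a Python list of field names and struct format codes, a drastic simplification of what Packedobjects actually does with XML Schema, but it shows why shipping only the values is so compact:

```python
import gzip
import struct

# A tiny "protocol": three fields whose names and types both sides know
# in advance. Hypothetical example data, not a Packedobjects format.
schema = [("temperature", "h"), ("humidity", "B"), ("valve_open", "B")]
values = {"temperature": -12, "humidity": 55, "valve_open": 1}

xml = ("<reading><temperature>-12</temperature>"
       "<humidity>55</humidity><valve_open>1</valve_open></reading>")

# Approach 1: compress the whole markup as text.
compressed_xml = gzip.compress(xml.encode())

# Approach 2: send only the values; the schema reconstructs the rest.
fmt = "".join(code for _, code in schema)
packed = struct.pack(fmt, *(values[name] for name, _ in schema))

print(len(xml), len(compressed_xml), len(packed))
# the schema-packed form is 4 bytes vs 98 bytes of raw markup
```

The receiver runs `struct.unpack(fmt, packed)` with the same schema to get the values back, and the same schema can validate every message, which is the point the paragraph above makes about ASN.1-style encoding.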

Going mobile…

If you are working on restricted platforms such as mobile or embedded devices, in the end it is all about tool support. This is where I believe XML does well. If you want to parse a schema language efficiently, you probably have few options. Libxml2 does a great job of this, and it does it quickly. What’s more, this parser is everywhere; for example, it can be in your pocket right now if you own an iPhone. I decided to see how Packedobjects would perform on iOS if I wrapped the current API in a more high-level interface that works with strings rather than exposing the lower-level Libxml2 doc type API. The porting process was fairly quick and painless. I built an example program that takes all the XML files in the Packedobjects repository and runs them to get performance metrics.

Results of 1000 encode/decode runs
The screenshots show the results running on a 5th gen iPod. This example is available to try out here. There is quite a big discrepancy between encoding and decoding speed, but overall I am pleased with how the tool performs on the devices I tried. I will soon add support for 64-bit encoding and decoding to see what impact this has on an iPhone 6.

Size matters…

One thing I have avoided talking about in this post until now is the key metric of encoded size. Rather than taking my word for it, you should pick your own data set and try it for yourself. For the type of data I work with, Packedobjects outperforms the other approaches I tried. I would classify this data as highly structured and not dominated by string data types: the kind of data that might originate from the Internet of Things (IoT), sensor networks, network management and so on.

by john at May 24, 2015 01:30 PM

Andrew Zonenberg, Silicon Exposed

Graduating, TDR prototype, lab move, and a conference talk

So, it's been a busy couple of months and I haven't had time to post anything. Here's a few quick updates:

I successfully defended my Ph.D thesis a few weeks ago and will be graduating next weekend. You can download the thesis and browse the code if you're so inclined. I plan to continue developing the project in my spare time and to use it as the basis for future embedded gadgets, so expect more posts on it over the coming months.

The TDR board came in and I assembled the first prototype. It had a few bugs (which I'll detail in a future post), but after reworking them all of the major subsystems seem functional, and I'm working on the firmware. Expect another post over the summer once I've made more progress.
TDR prototype during bring-up.
This was my improvised jig for holding probes on small test points.
Now that I'm done with school, I'm moving across the country in a week to start my new job. My lab is currently living in 125 cardboard boxes weighing just shy of 1900 pounds. (This doesn't count my desktop computer and monitors, test equipment, microscope bench, or the servers and rack, as they're all still in use and will be packed up over the next few days.) The lab will be totally down for 1-2 months.

The current state of my lab
Finally, I will be speaking on CPLD reverse engineering at REcon this June. The topic of the talk is the XC2C32A, which my regular readers may remember from a previous post. I'll describe the reverse engineering process, how far I got, and showcase bitstreams generated by my toolchain running on actual silicon. If any of you are going, by all means introduce yourself :)

by Andrew Zonenberg at May 24, 2015 12:42 AM

May 15, 2015

Free Electrons

ELC 2015 videos available

The videos from the last Embedded Linux Conference, which took place in late March in San Jose, California, are now available on YouTube! This represents a lot of interesting and useful content on embedded Linux topics.

You’ll find below the videos of the three talks given by Free Electrons engineers at this Embedded Linux Conference.

An Overview of the kernel DMAEngine subsystem, Maxime Ripard

MLC/TLC NAND Support: Challenges for MTD/NAND Subsystem, Boris Brezillon

The Device Tree as a Stable ABI: A Fairy Tale?, Thomas Petazzoni

by Thomas Petazzoni at May 15, 2015 07:19 AM

May 12, 2015

Michele's GNSS blog

A modern GNSS front-end

Earlier this year I had the occasion and privilege to be trying out a new front-end produced by NTLab, the NT1036. I thought it would be interesting to share this with the GNSS crowd.
The kit arrived as two separate boards, a control board and the actual chip evaluation board, along with a CD containing the software and a detailed data-sheet. The controller board connects seamlessly to the evaluation board by means of a single flat cable with RJ12 ends. Although the suggested supply voltage is 3.0V ±5%, it was very convenient to use the same cable to power the board at 3.3V; having a single common supply also avoids currents on the control lines. In the end the chip worked fine in this configuration, so I assume it was a safe choice.
The chip has many unique characteristics that make it suitable for a modern GNSS receiver. The ones of greatest interest to me are the following:
  • Four independent input channels
  • Two wideband VCO banks, on high and low RNSS bands, which can be routed with great flexibility amongst the four mixers, in particular allowing:
    • GPS/Glonass L1+L2 or L1+L5
    • GPS/Beidou L1+L2 or L1+L5
    • All 4 channels on either L1 or L2/L5
  • 3.0V supply voltage and low power dissipation (ideal for USB-powered devices)
  • Analog or digital output options for IF (real-only, which I like best) and clock lines.
  • Small, easy to assemble package
Obviously, the killer applications for this kind of chip are well-contained antenna arrays and multi-frequency, multi-constellation hardware and software receivers.
With a lot of test equipment at hand, one could really crack a nut like this one. With my limited hardware, however, I decided to use my SdrNav40 board and slightly modify its firmware to ignore the 4 on-board RF channels and capture the evaluation kit's outputs and clock instead.
Figure 1: Test setup, with 4 way power splitter and SdrNav40 powering the antenna.
Figure 2: Closeup of the test setup
Two tests were particularly useful for me: GPS/Glonass L1+L2, and four channels on L1. The first should lift any doubt about the potential fields of application of the chip. The second should satisfy my curiosity about the phase behaviour of common-LO (Local Oscillator), multiple-input front-ends.
The GUI to control the configuration of the NT1036 is incredibly rich and professional: low hanging fruit for a curious engineer.
Figure 3: NT1036 configuration tool: general settings tab, where the synthesizers can be programmed
Figure 4: NT1036 configuration tool: channels 1 and 2 tab
Figure 5: NT1036 configuration tool: main chip blocks tab
For GPS/Glonass reception, the tuner offers a default configuration with the two VCO banks tuned midway between GPS L1 and Glonass G1 (and similarly for L2/G2), giving high-side mixing for GPS and low-side mixing for Glonass. Configurable IF filter banks select one or the other. The distance between the centre frequencies (about 26 MHz and 20 MHz for the high and low RNSS bands respectively) suggests an L1 plan in which an FS of about 52 MHz puts both carriers around FS/4 for ease of down-conversion. Setting FS to 53 MHz (derived from an integer PLL) puts GPS L1 at 14.58 MHz. Plots that everyone likes follow.
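The arithmetic behind this frequency plan can be checked in a few lines. The 1590 MHz LO and the Glonass G1 centre below are my assumptions, chosen to be consistent with the "midway between the carriers" description and the 14.58 MHz figure:

```python
# Frequency-plan arithmetic for the L1 configuration described above.
GPS_L1 = 1575.42e6   # GPS L1 carrier (standard value)
GLO_G1 = 1602.0e6    # Glonass G1 band centre (approximate, assumed)
LO = 1590.0e6        # VCO between the two carriers (assumed)
FS = 53e6            # sampling rate from the integer PLL

if_gps = LO - GPS_L1   # high-side mixing for GPS
if_glo = GLO_G1 - LO   # low-side mixing for Glonass

print(if_gps / 1e6, if_glo / 1e6, FS / 4 / 1e6)
# GPS L1 lands at 14.58 MHz, near FS/4 = 13.25 MHz and safely under FS/2
```

Keeping both IFs close to FS/4 is what makes the subsequent digital down-conversion cheap: multiplying by a complex exponential at FS/4 reduces to sign flips and swaps.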
Figure 6: PSD of samples acquired in high-injection mode on L1 at about 50Msps
Figure 7: Histogram and time series of the signal acquired with NT1036 (sign and magnitude output)
Figure 8: Results of GPS satellites acquisition.

I plan to continue my tests on the chip, subject to time, which is always scarce!
Till next time...

by (Michele Bavaro) at May 12, 2015 08:43 PM

May 10, 2015


NC393 Development progress: Multichannel memory controller for the multi-sensor camera

Development of the NC393 camera has just passed an important milestone – we completed the HDL code that constitutes the core of this new camera and tested most of the Zynq-specific features that were not available in the older Spartan-3 FPGA used in our current NC353 devices. The next development phase will involve porting some of the existing code that deals with sensor interfacing, gamma correction, histograms, color conversion and JPEG/JP4 compression – code that was tested in thousands of cameras and many billions of processed images, including the applications listed in Wikipedia. The new camera is designed primarily for multi-sensor applications – up to four sensors connected directly to the system board and more through multiplexers, as we currently do in Eyesis4π cameras. It is the memory controller that had to be redesigned completely; the sensor and compressor channels can reuse most of the existing code and just need 4 instances of the same modules instead of a single one. Starting early this year I got an opportunity to put aside other projects and work full time on the new camera code.

FPGA features different from the previous Elphel cameras

The new features tested include the I/O elements needed to implement the DDR3 interface (described in earlier posts) and communication between the ARM cores (PS – processing system) and the FPGA (PL – programmable logic). Zynq has multiple channels of communication based on AXI standards; two of these interface types are used in the current design:
SAXI_GP0 – general purpose memory-mapped interface controlled by the processors, convenient for writing data to the various registers inside the FPGA fabric that determine the operation of the device. The read channel of the interface allows the CPU to get status information back from the PL. This interface is 32-bit wide and is not intended for high bandwidth applications.
AXI_HP0 – high speed channel allowing 64-bit wide transfers between the system memory and the FPGA logic. Zynq offers four such channels; the current design uses one to implement a bidirectional “bridge” between the system memory and the dedicated DDR3 device connected to the FPGA and used as an image/video buffer. Two of the remaining channels will be used to transfer compressed images to the system memory (to stream out and/or record to HDD/SSD), and one for the SATA interface.
Other AXI channels not yet used in the NC393 code include ACP (Accelerator Coherency Port), which has the same bandwidth as AXI_HP but “sees” the memory the same way the processors do (through the same cache levels). This port is intended, as its name suggests, for “accelerators” – programmable logic tightly coupled with the CPU, where latency is critical but the amount of data transferred is relatively small, so it will not disturb normal cache usage by the processors.
Implementation of the HDL code that interacts with these AXI ports took more time than it should have, partly because the Zynq manufacturer does not provide HDL code for simulation – only proprietary encrypted modules are available, which are useless for our preferred Free Software tools. When I tried to simulate the AXI interfaces I only got the output from the following statement:
$display("Warning on instance %m : The Zynq-7000 All Programmable SoC does not have a simulation model. Behavioral simulation of Zynq-7000 (e.g. Zynq PS7 block) is not supported in any simulator. Please use the AXI BFM simulation model to verify the AXI transactions.");
We had to implement both the synthesizable HDL modules for our product and the simulation code for SAXI_GP and AXI_HP missing from the software distribution. This code definitely has limitations compared to the proprietary encrypted one – we implemented only the features needed in our design (for AXI_HP it does not provide 32-bit bus functionality). Nevertheless it seems to work for our application and is now available under the GNU GPLv3 license for others to use as a part of the x393 project at GitHub.

Custom memory controller

An external memory controller is a rather intimate part of the system design, and I do not believe it is possible to create an efficient one-size-fits-all implementation. Yes, Xilinx offers the MIG IP that can be inserted into your custom design, but we need more control over what is going on inside it; the earlier post “DDR3 Memory Interface on Xilinx Zynq SOC – Free Software Compatible” describes the physical layer (PHY) of our implementation. Dynamic RAM devices impose multiple access restrictions, and a general purpose memory controller essentially tries to hide these details from the processes that use the memory, while keeping the data rate as close as possible to the theoretical maximum (clock frequency multiplied by bus width multiplied by two for DDR devices).
Some of the main specifics of dynamic RAM devices are:

  • Memory is page-oriented, access within the same page is fast, but opening/closing pages (“activate” and “precharge” terms are used in the device manuals) is slow
  • Data transfer happens in multi-word “bursts”; DDR3 devices have normal bursts of 8 words (word width depends on the memory organization) and short (chopped) bursts of 4 words, but chopped bursts take the same time as 8-word ones, so they offer no advantage when transferring large amounts of data. For our application we can consider the memory device to be 128-bit (8×16 bits) wide
  • The memory array is divided into “banks” (DDR3 has 8 of them), and transfers to/from one bank can take place simultaneously with activation/precharging of other one(s), as these operations do not use the data bus.
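For reference, the theoretical peak bandwidth of such a device follows directly from the clock frequency and bus width; the 400 MHz memory clock figure used here is taken from the calibration section later in this post:

```python
# Peak DDR3 bandwidth for a 16-bit device at a 400 MHz memory clock
clock_hz = 400e6
bus_bits = 16
peak_bits_per_s = clock_hz * 2 * bus_bits   # DDR: two transfers per clock edge pair
burst_bits = 8 * bus_bits                   # a normal 8-word burst moves 128 bits

print(peak_bits_per_s / 8e9, "GB/s")        # theoretical ceiling the controller chases
```

Everything the controller does (page consolidation, bank interleaving) is about approaching this ceiling despite the activate/precharge overhead.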

These features provide a clue to achieving high average bandwidth. Basically there are two strategies:

  • Consolidate multiple accesses to the same page. In the simplest form (common for the camera designs) write consecutive memory locations (like fill memory with the scan-line data from the sensor). With 16-bit wide memory it is possible to transfer up to 2048 bytes at the full memory bandwidth with just one “activate” in the beginning and one “precharge” (or auto-precharge) in the end.
  • Design the memory addressing in such a way, that translation of the linear address to physical bank, page number (“row address” in DRAM terminology) and in-page address (“column address”)  makes it likely to simultaneously operate multiple banks.

While the first clue is easy to follow, the second one is not. Depending on the particular clock speed/timing parameters, you may need 3–5 interleaved banks to reach full data bandwidth utilization – rather difficult to achieve for random access without making special assumptions about the nature of the application data.
A camera deals mostly with 2-d data arrays, and the majority of scenarios use either sequential (scan-line ordered) access or depend on 2-d locality of the pixels (compression, de-warping, correlation, filtering, and more). This mode can use tiled access and read/write small rectangular pixel areas as atomic operations. Contrary to general-purpose processing memory, latency is usually not critical for the image memory; access patterns are predictable and can be pre-optimized in advance, rather than at run time during memory access.
This makes it possible to optimize a custom memory controller dedicated to image acquisition, processing and compression, in our case supporting multiple image sensors operating in parallel. Particular applications may include optical image-guided UAVs and other robotic devices.

Memory mapping and access types

Mapping of the 2-d imaging objects to the DRAM memory addresses targets both sequential and tiled accesses. Each image scan line uses a single bank address (0..7) and increasing column addresses (2048 bytes or 128 bursts), then increasing row addresses. Each group of 8 lines shares the same row/column addresses, with an individual bank for each line, as shown in Fig. 1.
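A minimal sketch of this layout (my own illustration, not the actual HDL) maps a burst-aligned pixel coordinate to bank/row/column. The key property is that vertically adjacent lines differ only in bank number, which is what enables the bank interleaving described above:

```python
PAGE_BURSTS = 128  # 2048-byte page / 16-byte bursts

def map_burst(x_burst, y, frame_width_bursts):
    """Illustrative Fig.1-style mapping of a burst coordinate to (bank, row, col)."""
    bank = y % 8                                     # one bank per line in a group of 8
    linear = (y // 8) * frame_width_bursts + x_burst # burst index within the bank plane
    row = linear // PAGE_BURSTS                      # "row address" (page number)
    col = linear % PAGE_BURSTS                       # in-page "column address"
    return bank, row, col

# adjacent lines hit different banks while sharing row/column addresses
print(map_burst(5, 0, 128), map_burst(5, 1, 128))
```

A vertical walk through the image therefore cycles through all 8 banks before reusing one, letting activates/precharges overlap with data transfers.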

Figure 1. Memory layout for 2-d image objects


Atomic memory accesses are currently limited to ¼ of the 4KB BRAM memory blocks available in the Xilinx Zynq FPGA part, which makes 64 bursts or ½ of a memory page. Crossing a page boundary during sequential access requires precharge and activation of different memory pages in the same bank, so while the code can split accesses automatically, it is beneficial to align the full frame width to a multiple of 64 bursts (1024 8-bit or 512 16-bit pixels).

Scanline frame access

The memory controller provides application modules with scanline windowed access to image frames defined by the memory start address and the full (possibly padded) frame width, measured in 16-byte bursts. The access window is defined by conventional X0, Y0, width (in bursts) and height (in lines/pixels).

Figure 2 Access window in scanline mode


The scanline access module splits the requested window into a sequence of up to 64-burst data transfers, generates “page ready” and “frame ready” signals to the application module, and accepts “frame start” and “next page” signals. It also supports inter-channel synchronization by providing a “next line number” output and a “suspend” input. An external module can compare the last line number acquired from the sensor input channel and suspend the compressor/image processing module, providing low-latency video.
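The splitting behaviour can be sketched as a simple generator. This is a simplified software model (mine, not the HDL): the real module also aligns splits to page boundaries, which this sketch ignores:

```python
def split_window(x0, y0, width, height, max_bursts=64):
    """Yield (x, y, length) transfers covering a scanline window.

    Simplified model: real hardware additionally splits at page boundaries."""
    for y in range(y0, y0 + height):
        x = x0
        left = width
        while left > 0:
            n = min(left, max_bursts)  # atomic transfers are capped at 64 bursts
            yield (x, y, n)
            x += n
            left -= n

# a 100-burst-wide line becomes a 64-burst and a 36-burst transfer
print(list(split_window(0, 0, 100, 1)))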

Tiled frame access

Many image processing and compression algorithms consume or generate 2-d blocks (tiles) of data. Some applications require overlapping tiles, including regular JPEG compression of color images. While the compression algorithm itself uses non-overlapping 8×8 pixel blocks (16×16 macroblocks for 4:2:0 mode), extra pixels around the blocks are needed for the Bayer-to-YCbCr conversion that is convenient to implement right in front of the compressor, where the data is already available in 2-d format, not in the scanline order in which it comes out of the sensor.

Figure 3 Access window in tiled mode


Tile overlap is needed both horizontally and vertically, but horizontal overlap is easy to implement in the application module just by reusing already buffered (in FPGA Block RAM) data from the previous tile, while vertical overlap would require buffering the whole width of the sensor – not scalable for high resolution sensors, and requiring extra Block RAM modules in the fabric. This is why the memory controller module provides only vertical tile overlap, accepting three byte-wide parameters (width is limited by the total “area” of 64 bursts in a tile) – tile width, tile height and tile step – in addition to X0, Y0, window width and window height.
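The relation between tile height and tile step can be illustrated with a small generator (my sketch, not the controller's logic): the vertical overlap between consecutive tile rows is simply tile_height − tile_step:

```python
def tile_rows(y0, window_height, tile_height, tile_step):
    """Yield the starting line of each (vertically overlapping) tile row."""
    y = y0
    while y + tile_height <= y0 + window_height:
        yield y
        y += tile_step  # step < height => each tile overlaps the previous one

# 16-line-tall tiles stepped by 8 overlap their predecessors by 8 lines
print(list(tile_rows(0, 32, 16, 8)))
```

For non-overlapping tiles one would simply set tile_step equal to tile_height.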

Tile internal structure

The memory controller supports two types of tiles. The first type (Tile16) maps data to a sequence of bursts as vertical columns, each burst representing a horizontal row of 16 (8 bpp mode) or 8 (16 bpp mode) pixels.

Figure 4a: Tile16 - tile with 1 burst-wide columns


Figure 4b: Tile32 - tile with 2 burst-wide columns


Columns are traversed top to bottom, then left to right as shown in Fig. 4a. Due to memory timing restrictions this mode allows only some values for the tile height (0, 6 and 7 modulo 8). Tile32 allows more variants for the tile height, as there are more clock cycles between re-opening a different page in the same bank; it can be (0, 3, 4, 5, 6, 7 modulo 8). All tiles with a height of 8 or less are valid, as it is possible to keep all banks open between the columns of a tile; all heights are valid for single-column tiles too. A single-column Tile32 of maximal size (64 bursts) corresponds to a square area of 32×32 pixels in 8 bits per pixel mode.
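These height constraints can be collected into a small helper. This is my reading of the rules above, written out for illustration rather than taken from the project code:

```python
def tile_height_valid(height, tile32=False, single_column=False):
    """Check a tile height against the timing rules described in the text."""
    if height <= 8 or single_column:
        return True                # always valid per the text
    allowed = (0, 3, 4, 5, 6, 7) if tile32 else (0, 6, 7)
    return height % 8 in allowed   # multi-column tiles are restricted modulo 8

# 14 is valid for Tile16 (14 % 8 == 6), 12 only for Tile32 (12 % 8 == 4)
print(tile_height_valid(14), tile_height_valid(12), tile_height_valid(12, tile32=True))
```

Such a check would typically live in the software that programs the window access controllers, rejecting invalid tile geometries before they reach the hardware.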

NC393 HDL code and the memory controller implementation

The Elphel camera code is built around the 16-channel DDR3 memory controller; at this stage the only modules that are not part of this controller are the command and status distribution networks, the system-memory-to-external-memory bridge over AXI_HP, and temporary modules used to test controller functionality.

Figure 5. Memory controller block diagram


Command and status networks

The command distribution tree is designed to write data to various memory-mapped registers distributed over the whole design. All these registers are write-only (readback is optionally provided by a separate Block RAM-based module), so the data paths can accommodate any number of register slices if needed to meet timing. This bus is light-weight to minimize the required FPGA routing resources – it uses only 9 address/data signals and a strobe, and can deliver 0 to 32 bits of data (configured by parameters at the destination module) sent over 1 to 6 clock cycles. The command distribution tree accepts commands from the software over MAXI_GP0 or from a PL sequencer driven by the frame synchronization signals from the sensors – the latter will be ported from the current NC353 camera HDL code.
The status receive tree supplements the command tree and provides the processing system with feedback data from the modules distributed over the FPGA fabric. It includes a 256×32 register file available for PS read access with zero latency, and a unidirectional tree of light-weight (10 signals) networks that also includes multiplexers and status transmitters. Multiplexers route the messages (up to 6 clock cycles long depending on the payload) to the terminating register file. Status transmitters (controlled through the command distribution network) provide means to synchronize responses to PS requests using 6-bit IDs; they send up to 26 bits of status information either in response to a command or automatically when the input data changes.

Memory interface

The memory interface is forked from the earlier eddr3 project (with some important bug fixes). In addition to the physical layer components it includes a sequencer that generates address and control signals for memory device access, following program data prepared in advance. These sequence programs come from one of two sources – PS Sequence Memory, written under software control, and PL Sequence Memory, filled in by one of the sequence encoders just before execution (during the previous memory transaction). Both memories are made of 4KB Block RAM modules. PS sequences are used for memory refresh access instructions, memory initialization and calibration, and any other pre-programmed memory operations that need to be executed with specific timing.
The memory interface is configurable with Verilog `define macros and can serve up to 16 concurrent channels, each being read-only, write-only or bidirectional. Each channel is supposed to have a 4KB Block RAM buffer (or two of them for bidirectional channels) configured in SDP (simple dual port) mode with a 64-bit wide input (for memory read) or a 64-bit output (for memory write). The memory interface also provides channels with clock and control signals for the memory side of the buffers; the other side of these dual-port buffers is under channel logic control and may be clocked by a different source. Two layers of registers may be inserted in both the input (16:1) multiplexer path and the output distribution of the 64-bit wide data buses that may need routing to different parts of the device.
Channel buffers are based on 4KB Block RAM modules, each split into four 1KB pages, making them suitable for transfers of up to 64 16-byte bursts. Of the four pages, one (in some overlapping-tile applications – two) is in use by the channel logic (being consumed or generated), another is used by the transfer to/from the DRAM memory, and the remaining ones provide the buffering needed while the memory is in use by other channels.

Channel arbiter

The 16-channel arbiter accepts two levels of urgency (“want” and “need” signals) from the channel controllers. In most cases memory read channels generate “want” if there is at least one empty buffer page and the channel will need it later (i.e. not for the last pages in a frame); “need” is generated when the channel is consuming the last available page. Similarly for the memory write channels – “want” is generated when there is at least one completed page, “need” – when there are no empty pages left. Channels that can wait for the data may skip raising the “need” signal, leaving more resources to other channels that are tied to a constant data rate (such as inputs from the sensors).
In addition to the two levels of urgency (channels with “need” requests are served before “want” ones compete), the arbiter provides channel priorities. Each channel has an associated counter that increments at each event (new request or request grant), with simultaneous requests resolved by static priority by channel number. The channel having the highest counter value wins, receives the “grant” signal, and that channel's counter is reset to the specified channel priority value, so priority 0 makes that channel wait the maximal time.
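The arbitration rule can be modelled in a few lines of Python. This is an illustrative behavioural model of the scheme described above, not the actual HDL:

```python
def arbitrate(requests, counters, priorities):
    """Pick a winner among requesting channels.

    requests: {channel: 'need' | 'want'}; counters/priorities: {channel: int}.
    'need' requests are served before any 'want' request competes."""
    needs = [c for c in requests if requests[c] == 'need']
    pool = needs or [c for c in requests if requests[c] == 'want']
    if not pool:
        return None
    # highest counter wins; simultaneous requests fall back to channel number
    winner = max(pool, key=lambda c: (counters[c], -c))
    counters[winner] = priorities[winner]  # reset: priority 0 waits longest next time
    return winner

counters = {0: 5, 1: 9, 2: 9}
# channel 2 wins despite lower counters elsewhere, because its request is urgent
print(arbitrate({0: 'want', 1: 'want', 2: 'need'}, counters, {0: 0, 1: 0, 2: 0}))
```

Resetting the winner's counter to its priority value is what gives higher-priority channels a head start in the next round.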

Figure 6. Memory access arbitration and timing


Sequence generation takes less time than the actual memory access, and channel arbitration happens while the previous sequence data is being sent for execution. Fig. 6 shows that the channel 2 sequence starts being transferred to the PL Sequence Memory as soon as the memory interface starts executing the sequence for channel 1.
There is an additional arbitration just before starting to execute a sequence – if the refresh module (which does not need to transfer sequence data, as it is already in the PS Sequence Memory) generates a “want” or “need” request, it competes against the already granted channel that has its sequence ready to be executed – Fig. 6 shows how the REFR sequence passes CHN1, which is ready to be executed. The sequence FIFO in the PL Sequence Memory allows only one sequence to be buffered. This limit is imposed to reduce waiting for service of urgent (“need”) requests, while avoiding a more complicated mechanism that would allow such requests to pass other channels' non-urgent (“want”) requests in the sequence memory FIFO. Allowing efficient execution of memory transfers of significantly different sizes remains a possibility for future improvements.

Sequence Encoders

Sequence encoders are shared between the channels – the channel that wins arbitration is granted use of an encoder to generate its memory access sequence. Currently there are six such modules: they generate scanline read, Tile16 and Tile32 (see Fig. 4a-b) sequences, and similar ones for memory writes. These modules accept address and size parameters from the window access controllers and use HDL-encoded templates to generate the control sequence for the next memory access operation.

Channel window access controllers

Window access controllers implement access to selected rectangular areas inside the image frame. There are two types currently available – scanline access (Fig. 2) and tiled access (Fig. 3). Distinctions between read and write modes, and between Tile16 and Tile32 modes, are passed as run-time parameters. They are used later to select the specific sequence encoder each time a request is granted by the arbiter. These modules require individual instances for each channel that uses them, as they have to keep track of the related channel buffer, tile location and other module state variables.
Additional controllers will be developed for other types of accesses when needed by the image processing algorithms. An example application may be a distortion correction procedure where either the input or the output uses tiles that are not defined by a regular grid.

Memory access channels

Channel 0 is designed for programmable access to the memory. It uses PS Sequence Memory written through MAXI_GP0 under the software control. It has both read and write buffers for operations that involve data transfer, it is used for memory initialization and calibration/training, it can also be used to test other access sequences without re-generation of the bitstream.
Channel 1 implements a fast bidirectional bridge between the system memory and the dedicated image memory. On the system side it uses the AXI_HP0 port in 64-bit mode; on the image memory side it implements scanline window access. It is possible either to fill the selected window in the image memory with consecutive data from the system memory, or to read an image memory window to a linear array in the system memory.
Channels 2-5 will be used to record data from the four sensor ports; currently one channel is connected through two buffers to the SAXI_GP0 interface for testing scanline windowed memory access.
Channels 6-9 will be used in Tile32 mode to read 2-d data for image compression. The temporary implementation uses two channels connected to SAXI_GP0 read/write for testing purposes.
Remaining six channels may be used for application-specific image processing.

Software tools used

The list of tools used for this project is the same as listed for the earlier eddr3 project. The only difference is that it is now Eclipse Luna instead of Kepler, and some bugs in the VDT plugin are fixed – bugs that revealed themselves while the plugin was being used with a gradually growing code base.
The x393 project code itself is available under GNU GPLv3 Free Software license, does not depend on any undocumented or encrypted “IP” modules and can be simulated with the Free Software tools. Project configuration files allow importing it to Eclipse IDE when VDT plugin is installed.

by andrey at May 10, 2015 03:15 AM

May 06, 2015

Free Electrons

Free Electrons contributes U-Boot support for SECO i.MX6 uQ7 board

Amongst the multiple customer projects we are currently working on that rely on i.MX6 based platforms, one is using the SECO i.MX6 µQ7 System on Module as its heart. Unfortunately, the SECO Linux BSP relies on old U-Boot and Linux kernel releases, which we didn’t want to use for this project.

Therefore, Free Electrons engineer Boris Brezillon has ported the mainline U-Boot bootloader to this platform and contributed the corresponding patches. These patches have been merged, and support for this platform is now part of the 2015.04 U-Boot release. To build it, simply use the secomx6quq7_defconfig configuration.

The work behind these patches was funded by ECA Group.

by Thomas Petazzoni at May 06, 2015 07:45 AM

May 05, 2015

Free Electrons

Free Electrons engineer Alexandre Belloni co-maintainer of the Linux RTC subsystem

The Linux RTC subsystem supports the Real Time Clock drivers for a large number of platforms and I2C or SPI based Real Time Clocks: it contains about 140 different device drivers, plus the RTC core itself. The current maintainer, Alessandro Zummo, unfortunately had very little time to address all the patches that were sent, and many of them were usually handled by Andrew Morton, acting as a fallback for various parts of the kernel that are not actively maintained enough.

To address this lack of maintainer time, Free Electrons engineer Alexandre Belloni recently became a co-maintainer of the RTC subsystem, as can be seen in this patch to the MAINTAINERS file. Alexandre has already started his work by cleaning up the patchwork instance listing all the pending RTC patches, reducing the number of pending patches from 2843 to 436, actively applying new patches being posted, and reviving old patches that never got any attention.

Up to and including the 4.1 release, RTC patches will flow to Linus Torvalds through Andrew Morton, but starting from Linux 4.2, Alexandre will send his pull requests directly to Linus.

by Thomas Petazzoni at May 05, 2015 12:28 PM

Video Circuits

London Alternative Photography Collective Talk on Optical Sound

I will be giving a talk later today about optical sound and the history of the photography of sound; Sally Golding will also be showing some of her work!

Photo from Seeing Sound by Winston E.Kock

by Chris at May 05, 2015 08:31 AM

April 30, 2015

Video Circuits

Psyché Tropes + Video Circuits

Here are a few images from the night we did a while back, and some video shot by Video Hack Space! Thanks Fabrizio!

by Chris at April 30, 2015 08:10 AM

April 28, 2015

Bunnie Studios

Name that Ware April 2015

The Ware for April 2015 is shown below.

Have fun!

by bunnie at April 28, 2015 11:36 AM

Winner, Name that Ware March 2015

The Ware for March 2015 is a PC AT Single T4 4 Meg Transputer board assembly. Jim is the winner, congrats! Email me to claim your prize.

by bunnie at April 28, 2015 11:36 AM

April 25, 2015


FPGA to DDR3 memory interface: step-by-step timing calibration and set up

Working with the DDR3 memory interface, I could not resist the temptation to investigate further a very useful feature of modern FPGA devices – individually programmable input/output delay elements on all (or at least many) of their pins. This is needed both to prepare for increasing the memory clock frequency and to be able to individually adjust the timing on other pads, such as the sensor ports, especially when switching from the parallel interfaces to the high speed serial interfaces of modern image sensors.

The Xilinx Zynq device we are using has both input and output delays on all the low-voltage pins used for the memory interface in the camera, but only input delays on the higher voltage range I/O banks. Luckily, the image sensors connected to these banks need just that – the data rate to the sensors is much lower than the rate of the data they generate and send to the FPGA.

Adjusting memory timing with Python code

Adjustment of the optimal pin delays for the memory interface can be done in several ways, and many applications require that it be either implemented entirely in hardware or use very limited CPU resources – that is the case when the memory being set up is the main system memory, so the CPU cannot use it. On the other hand, when the memory is connected to the FPGA part of a system that is already running with full software capabilities, it is possible to use more elaborate algorithms.

I call it for myself “the Apple ][ principle” – do not use extra hardware for what can be done in software. In the case of delay calibration for the memory interface it should be possible to use a reasonable model of the delay elements, perform measurements, calculate the parameters of the model, and finally calculate the optimal settings for each programmable component. Performing the full measurements and parameter fitting can be computationally intensive (the current Python implementation runs 10 minutes), but calculating the optimal settings from the parameters is very simple and fast. It is also reasonable to expect that the individual parameters have a simple dependence on temperature, so it will be easy to adjust them to a varying system temperature. Another benefit of this approach is that it can work even with delay elements that have non-monotonic behaviour (sometimes the case when using FINEDELAY elements), and finally – the internal parameters of the delay elements do not depend on the clock frequency, so the parameters can be measured at a lower clock frequency and the settings then re-calculated for a higher one. Adjusting timing parameters directly at the target frequency can be more difficult, as the windows of parameter combinations that allow the memory device to operate can be much smaller; it may not be possible to probe marginal values of some delays (to calculate the optimal center value) without violating other timing parameters.
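The final step – turning fitted model parameters back into register settings – can be sketched with a simple linear delay model. This is my simplification for illustration; the real code also handles non-linearity and the fine-delay taps:

```python
def delay_setting(target_ps, offset_ps, ps_per_step, max_step=31):
    """Convert a desired delay into the nearest coarse-tap setting.

    offset_ps and ps_per_step are the fitted model parameters (constant
    shift and scale); the result is clamped to the 32-tap range."""
    step = round((target_ps - offset_ps) / ps_per_step)
    return max(0, min(max_step, step))

# e.g. a 790 ps target delay with a fitted 10 ps offset and 78 ps/step taps
print(delay_setting(790, 10, 78))
```

Because only the measurement/fitting stage is slow, this cheap final calculation can be re-run whenever the clock frequency or temperature changes.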

The procedure described below can be used to measure the delay parameters of the memory interface and find the optimal combinations of the settings requiring no manual adjustments of the initial values. The software is written in Python and is a part of the Elphel GitHub repository x393 as x393/py393 PyDev project.

The Python code includes a module that can parse Verilog header files with parameter definitions, so all changes in the HDL code are automatically applied to the Python program. Running the program on the target hardware generates updated values of the delay settings as a Verilog file, so these measured values can be used in simulation. The program is of course designed to run on the target platform, but most of the processing can be tested on a host computer – the project repository contains a set of measured data as a Python pickle file that can be loaded in the program with the command "load_mcntrl dbg/x393_mcntrl.pickle". The program can run automatically using a command file provided through the arguments, and it also supports an interactive mode. Most of the functions defined in the program modules are exposed to the program CLI, so it is possible to launch them and get basic usage help. The same is true for the Verilog parameters and macro defines – they are available for searching and it is possible to view their values.

Delay elements in the memory interface

Fig.1 Memory interface diagram showing signal paths and delays


There are in total 61 programmable delays and a programmable phase shifter as part of the clock management circuitry. Of these delays, 57 are currently controlled – the data mask signals are not used in this application (when needed they can be adjusted by a procedure similar to the DQ output delays), the ODT signal has more relaxed timing, and the CKE (clock enable) signal is not combined with the others. There are 3 clock signals generated by the same clock management module with statically programmed delays: clk (same frequency as the memory clock), clk_div (half the memory frequency) and mclk – also half frequency, but with a 90 degree phase shift with respect to clk_div – which drives the memory controller logic. A full list of the clock signals and their descriptions is provided in the project.

The variable phase shifter (with the current 400 MHz memory clock it has 112 steps per full clock period) essentially provides a variable-phase clock driving the memory device, but to avoid dependence on the memory's internal PLL circuitry, the memory is driven by the non-adjusted clock, and the programmed phase shift is applied to all the other clock signals instead.

Address/control signals and data to be written to the memory device originate in the registers and Block RAM of the controller running on the mclk global clock, then go through serializers (OSERDESE2 for synthesis, OSERDESE1 for simulation, to avoid undisclosed code modules). The serializers use two clocks, and in this design the slower clk_div is ¾ of the mclk period later than mclk itself, to guarantee positive setup time when crossing the clock boundary. Serializers for data, data mask and DQS strobes operate in DDR mode, while the ones for address and command signals use single data rate mode. Each of these signals passes through an individual 32-tap delay with a nominal 78 ps/step, followed by a 5-tap fine delay element (ODELAYE2_FINEDELAY), and then goes to the external memory device.
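The numbers above fit together nicely – one phase shifter step and the coarse delay range can be checked directly:

```python
period_ps = 1e12 / 400e6         # 2500 ps memory clock period at 400 MHz
phase_step_ps = period_ps / 112  # ~22.3 ps per phase shifter step
coarse_range_ps = 32 * 78        # 2496 ps: 32 taps at a nominal 78 ps/step,
                                 # i.e. almost exactly one clock period
print(period_ps, round(phase_step_ps, 1), coarse_range_ps)
```

This is consistent with the observation later in the post that the full delay range almost exactly covers one clock period at the current settings.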

On the way back, the data read from the memory and the read strobes (one per data byte) pass through IDELAYE2_FINEDELAY elements; the strobes then pass through BUFIO clock buffers that drive the input clock ports of the deserializers (ISERDESE2 for synthesis, ISERDESE1 for simulation), while the same clk and clk_div (as used for the output serializers) drive the system-synchronous ports. When crossing the clock boundary to mclk, the registers that receive data from the deserializers use the falling edge of mclk, and there is again ¾ of the mclk period to guarantee positive setup time.

The delay measurement procedure involves varying the delay that has a uniform phase shift step (1/112 of the memory clock period) and adjusting the variable “analog” pin delays that have some uncertainty: constant shift, scale (delay per step) and non-linearity. Some measurement steps require writing data to the memory and reading it back, and so depend on the periodic memory refresh; the automatic refresh is therefore temporarily turned off while the clock phase and command delays are modified.

Measuring delays in the signal paths and setting memory interface timing

Step 1: Finding valid command/address delays for each clock phase setting

The first thing to do to be able to operate the memory is to find the address/command line delay that is safe to use with each clock phase, and/or find which values of the phase shift are valid. The address and command signals use single data rate (they are sampled at the leading edge of the clock by the memory device), so it is easier to satisfy their setup/hold requirements than those for the data. DDR3 devices provide a special “write levelling” mode of operation that requires only the clock, the address/command lines and the DQS output strobes, and reports the result on the data bus. At this stage the timing of the read data is not critical, as the data stays the same for the same DQS timing, and it is either 0x00 or 0x01 in each of the data bytes.

It is possible to try reading data in this mode (reading multiple data words and discarding the first and last groups to remove dependence on read data timing), and if the result is neither 0x00 nor 0x01, then reset the memory, change the command delay (or phase) by, say, ¼ of the clock period, and start over again. If the result matches the write levelling pattern, it is possible to find the marginal value of the address delay by varying the delay of address bit 7 when writing Mode Register 1 (MR1) – this bit sets the write levelling mode, and if it reads as 0 the data bus will remain in a high impedance state.
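The scan described above can be sketched as follows (a hypothetical outline; `try_levelling` stands in for the real hardware sequence of programming the delays, writing MR1 and checking for the 0x00/0x01 pattern):

```python
# Hypothetical sketch of the Step 1 scan: for each clock phase, find the
# largest A[7] delay at which the write levelling mode is still entered.

def find_marginal_a7(phase, try_levelling, max_delay=159):
    """Return the largest delay that still enables write levelling,
    or None if no delay works at this phase."""
    marginal = None
    for delay in range(max_delay + 1):
        if try_levelling(phase, delay):
            marginal = delay
        else:
            break  # past the margin: A[7] arrives too late
    return marginal

# Toy model standing in for the hardware: levelling works while the
# delay keeps A[7] within the setup window for this phase.
def fake_levelling(phase, delay, limit=100):
    return delay <= limit - phase

print(find_marginal_a7(10, fake_levelling))  # 90 with this toy model
```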

Fig.2 Finding the command/address lines delay for each clock phase


The memory controller drives the address lines in “lazy” mode, leaving them unchanged when they are not needed (during inactive NOP commands), so it is easier to check whether the A[7] low → high transition happens too late. Additionally, the tested write levelling command has to be preceded by some other command with A[7] at a low level.

Figure 2 shows the process of scanning over phases and finding the longest delay on the A[7] line that still turns on the write levelling mode (shown with red diamonds). Command line delays are kept at zero until, at phase 82, the margin on the A[7] line becomes smaller than a preset limit (the command lines themselves are almost too late); at this phase the command line delay is increased so the command is recognized in the next clock cycle, and the marginal value of A[7] is accordingly increased by a full clock period. With the current settings the full delay range is almost exactly equal to the clock period; this will not be the case at higher memory clock rates (delays will cover more than a period) or when increasing the delay calibration clock rate from 200 MHz to 300 MHz (delays will cover less than a period). In Figure 2 there is a small gap (up to phase=86) where the marginal delay for A[7] cannot be measured, as it would exceed the maximal delay value available in the OSERDESE2 element.

Yellow triangles show the optimal values for the A[7] delay, calculated by applying linear interpolation to the marginal values and shifting the result horizontally by ½ of the clock period (56 phase steps).
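The interpolate-and-shift calculation can be sketched like this (a simplified illustration; the real program works with the full set of measured points):

```python
# Sketch of turning marginal A[7] delays into optimal ones: linear
# interpolation over phase, then a horizontal shift by half a clock
# period (56 of the 112 phase steps), wrapping modulo the period.

PHASE_STEPS = 112
HALF_PERIOD = PHASE_STEPS // 2  # 56 steps

def optimal_from_marginal(marginal, phase):
    """marginal: dict {phase: marginal_delay} from the scan.
    Returns the interpolated marginal value at (phase - 56) mod 112,
    which is taken as the optimal delay for `phase`."""
    src = (phase - HALF_PERIOD) % PHASE_STEPS
    lo = max(p for p in marginal if p <= src)
    hi = min(p for p in marginal if p >= src)
    if lo == hi:
        return marginal[lo]
    t = (src - lo) / (hi - lo)
    return marginal[lo] + t * (marginal[hi] - marginal[lo])

# Toy marginal-delay curve sampled at three phases
m = {0: 40.0, 56: 70.0, 111: 99.0}
print(optimal_from_marginal(m, 60))  # interpolates the marginal curve at phase 4
```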

At this preliminary stage the optimal command/address delays are assumed to be the same as for A[7] – they are connected to the same I/O bank. Later it will be possible to optimize each signal delay individually; when switching to a higher frequency the relative differences between lines can be assumed to be the same and applied accordingly.

During the next stages of the delay measurement, the command and address line delay values are all set whenever the clock phase is changed.

Step 2: Measuring individual delays for command (RAS,CAS,WE) lines

Fig.3 Command lines delays measurement


When the approximate value of the optimal delay for the address/command lines is known, it is possible to individually calibrate the delay for the command lines. The mode register set command involves a high (inactive) to low (active) transition on all three of them, so it is possible to probe turning on the write levelling mode when two of the three command lines (and all the bank and address lines) are set to the optimal values, while the delay on the remaining command line is varied. Sometimes this procedure leads to the memory entering an undefined/non-operational state (the write levelling pattern is not detected even after restoring known-good delay values); when such a condition is detected, the program resets and re-initializes the memory device.

To increase the range of usable phases, the other command/address lines are kept at delay=0 while there is still a safe setup time margin with respect to the memory clock (from phase = 32 to 60 on Fig. 3).

Step 3: Write levelling – finding the optimal DQS output delays for clock phase

Fig.4 DQS output delay measurement with write levelling mode


This special mode of DDR3 device operation is intended to adjust the DQS signal generated by the controller to the clock as seen by the memory device: the device samples the clock value at the leading edge of the DQS signals and replies with either 0x00 (clock was low) or 0x01 (clock was high) on each data byte of the DQ signals.

Fig.5 Calculated optimal DQS output delays for each clock phase


The clock phase is scanned over the full period, and for each phase the marginal (switching from 0x00 to 0x01) DQS output delay is measured for each of the byte lanes. This procedure directly results in the optimal values of the DQS output delays; there is no need to shift them by a half-period. Fig. 5 shows the DQS output delay values calculated by linear interpolation for each phase. To increase the range of DQ vs. DQS delay measurements, the DQS output signals are allowed to deviate slightly from the optimal – Fig. 5 shows the “early” and “late” branches and the amount of deviation.

A similar calculation is performed once more later, when additional data from the co-measurement of DQ and DQS output delays becomes available. At that stage it is possible to account for the non-uniform fine delay steps of the DQS output lines.

Step 4: Fixed pattern measurements

DDR3 memory devices have another special operational mode intended for timing setup that does not depend on the actual data being written to the memory or read back: reading a predefined pattern from the device. Currently only one pattern is defined – just alternating 0-1-0-1… on each of the data lines simultaneously. In this step 11 of the 8-word bursts are read from the memory and only the middle 8 bursts are processed, so there is no dependence on the (still) wrong timing settings that result in wrong synchronization of the data bursts. That provides 64 data words: half in even (starting from 0) positions that are supposed to be zeros, and half in odd ones (which should read all ones); the total number of ones is then calculated for each data bit for odd and even slots – 16 pairs of numbers in the range of zero to 32. These results depend on the difference between delays in the data and data strobe signal paths and allow detection of 4 different events on each data line: alignment of the leading edge on the DQ line to the leading edge of the DQS signal (as seen at the deserializer inputs), trailing edge of DQ to leading edge of DQS, and the same leading and trailing DQ edges to the trailing edge of DQS. They are measured as transitions from 0 to 1 and from 1 to 0, separately for even and odd data samples.
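The counting step can be sketched as follows (a simplified illustration assuming the middle 64 data words have already been extracted):

```python
# Sketch of the fixed-pattern result processing: from 64 16-bit data
# words (the middle 8 bursts of 8 words each), count ones per data bit
# separately for even slots (expected 0) and odd slots (expected 1).

def count_ones(words, bits=16):
    """Return [(even_ones, odd_ones), ...] per bit, each count 0..32."""
    result = []
    for bit in range(bits):
        even = sum((w >> bit) & 1 for w in words[0::2])
        odd = sum((w >> bit) & 1 for w in words[1::2])
        result.append((even, odd))
    return result

# Perfect timing: even words read all zeros, odd words all ones.
words = [0x0000, 0xFFFF] * 32
print(count_ones(words)[0])  # (0, 32) on every bit for this ideal capture
```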

Fig.6 Measured (marginal) and calculated (optimal) DQ input delays vs. DQS input delays


Most results are 0 or 32 (all data words read as 0 or as 1), but some provide intermediate “analog” results, when the corresponding words read differently depending on some uncontrolled factors. Later processing assumes that the difference from the middle value (16) is proportional to the difference between the measured (by the settings) delay value and the actual one. Additionally, if the number of such analog samples is sufficient, it is possible to process only them and discard the “binary” (all-0s/all-1s) transitions.

This measurement can be made with any clock phase setting. Even though there is normally a certain relation between the phase and the DQS delay (measured in the next step), a wrong setting shifts the read data by a full clock period, or 2 bits for each DQ line; with the 0-1-0-1 pattern there is no difference caused by such a shift, and we discard the first and last data bursts where it could be noticed.

Figure 6 shows the 4 measured variants for each data bit: “ir_*” for in-phase (DQ to DQS) DQ rising, “if” for in-phase DQ falling, “or” for opposite phase rising and “of” for opposite phase falling. Only “analog” samples are kept. “E*” and “N*” show the calculated optimal DQ* delay for each DQS delay value. The calculation is performed with the Levenberg-Marquardt algorithm using the delay model described later in this article; the same method is used both for input and output delays. The visible waves on the result curves are caused by the non-uniformity of the combined 32-tap main delays with the additional 5-tap fine delay elements; the different amplitudes of these waves are caused by the phase shift between the DQ and DQS lines (“phase” here is the fine delay (0..4) value – the full 0..159 delay modulo 5).

Step 5: Measuring DQS input delay vs. clock phase

Deserializers use both memory-synchronous clock (derived from DQS) and system-synchronous clk and clk_div, so there is a certain optimal phase shift between the two, allowing maximal deviation of the memory-synchronous input clock.

Fig.7 Measured (marginal) and calculated (optimal) DQS input delays vs. clock phase


Data crosses the clock domain boundary at a single clock rate (2 bits at a time for each data line), so the fixed pattern of alternating 0-1-0-1… cannot be used – regardless of the phase shift it will be the same “01” pair. For this reason we use an actual read data command, not the special read pattern mode. The random data present in the memory array after power-up could be used, but the program writes a 0-0-1-1-0-0-1-1… pattern on each data bit. This pattern will provide a different di-bit value on each DQ line even if the write DQ to DQS timing is not yet determined, so the actual data can be any of X-0-X-1-X-0… where X can quasi-randomly be 0 or 1. The pattern is recorded once, then the data is read with different DQS input delays (DQ input delays are set according to the Step 4 results), comparing only the middle portion with the beginning/end discarded as before. The marginal DQS delay is detected as the value at which the read data changes from the original value.
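The marginal-delay detection in this step can be sketched as (a toy illustration; `read_block` stands in for the real hardware readout):

```python
# Sketch of the Step 5 marginal-delay detection: the recorded
# 0-0-1-1... pattern is read back at increasing DQS input delays; the
# marginal delay is the first one where the middle portion of the data
# differs from the reference capture.

def marginal_dqs_delay(read_block, reference, max_delay=159):
    """read_block(delay) -> tuple of data words (middle bursts only)."""
    for delay in range(max_delay + 1):
        if read_block(delay) != reference:
            return delay  # first delay where the read data changes
    return None

ref = (0, 0, 1, 1) * 8
fake = lambda d: ref if d < 75 else (1,) + ref[1:]  # toy hardware model
print(marginal_dqs_delay(fake, ref))  # 75 with this toy model
```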

Figure 7 shows the results of such measurements as well as the calculated optimal input delays for the DQS lines. This calculation uses both the Step 5 (DQS vs. phase) and Step 4 (DQ vs. DQS) measurements and accounts for the fine delay non-uniformity.

Step 6: DQ to DQS output delays measurements

Fig.8 Measured (marginal) and calculated (optimal) DQ input delays vs. DQS output delays


This measurement is performed similarly to Step 4, where the DQ to DQS input delay relation was probed with the fixed-pattern readout mode. Now we already have known settings for the memory read operation and can rely on them to adjust the write mode (output) delays. An alternating 0-1-0-1 sequence on every line, similar to the pattern mode, is recorded with various DQS output delay values; for each DQS delay the appropriate phase and address/command delay values are used. Input delays (for DQS and DQ) are set for each phase using data from the previous steps, and the data written with different DQ output delays is read back, then processed in the same way as in Step 4.

Figure 8 presents the relation between DQ and DQS output delays, and the result of combining the Step 6 measurements with Step 3 (write levelling) – the optimal DQ and DQS output delay values for different clock phases can be seen on Figure 9, which shows all the delays. Allowing some deviation from the DQS-to-clock alignment (this requirement is more relaxed than the DQ-to-DQS delays) results in 2 alternative solutions for the same phase shift near phase=95; use of higher memory clock rates will produce more such multi-solution areas even without deviation from the optimal values.

Step 7: Measuring individual output delays for all address and bank lines

Having almost calibrated the read and write memory operations, it is now possible to set up the output delays for each of the remaining address and bank lines (so far only A[7] was measured; the other lines were just assumed to be the same). This measurement is done by writing a “good” pattern to a specific bank/row/column page (the column address uses the low bits of the row address), and “bad” data to all pages differing in one of the address or bank bits. For this test the refresh sequence (it is loaded by the software, not hard-wired in the HDL code) was modified to provide specified data on the bank/address lines, which is “don’t care” for this operation. These values are set to the inverted “good” address, and the refresh command was manually requested before the read operation, making sure that the command would cause all the address/bank bits to be inverted.

All the phase values are scanned; for each phase the command and address delays are set to the optimal values as defined so far, and the delay of only one line at a time is modified to find the marginal value that causes readout of the wrong data block.

This measurement is performed twice – first with a “good” address of all zeros, then with all ones – and the results are averaged for low → high and high → low address line transitions.

Step 8: Selecting valid parameter combinations for readout and write modes

Fig.9 All delays vs. clock phase


Figure 9 combines all the data acquired so far as a function of the clock phase shift. Most of the delays do not change when a new bitstream is generated after modification of the HDL code – the involved delays are defined by the fixed I/O circuitry and PCB/package routing. Only two of the signals involve FPGA fabric routes – the DQS input signals that include BUFIO clock buffers; these buffers can be selected and routed differently by the tools. These signals also show the largest difference on the graph (the two pairs of green lines – solid and dashed).

There are additional requirements that are not shown on Figure 9. DQ signals from the memory should arrive at the deserializer ¼ clock period earlier than the leading edge of the first DQS pulse – not 1¼ earlier or ¾ later – while the measurements so far were made only to the nearest clock period. The memory device generates exactly the required number of DQS transitions, so if the data arrives 1 clock too early, the first two words will be lost; if it arrives 1 clock too late, the last two words will be lost.

Fig.10 All delays vs. clock phase, filtered to satisfy period-correct write/read conditions


For this final step, the alternative variants of the settings that differ by full clock periods are selected and tested. First a block with incremented data (each word is the previous one plus 1) is recorded, and then a smaller block completely inside the recorded one, not using the first/last bursts, is read back. The write mode is not yet set up, so the first/last recorded bursts cannot be trusted, but the middle ones should be recorded incrementally, so any difference from this pattern has to be caused by incorrect readout settings.

After removing the invalid parameter combinations defining the readout mode, we can trust that the full block readout has all words valid. Then we can do the same for the write mode and check which of the variants (if any) provide correct memory write operation. In the test case (one particular hardware sample and one clock frequency) there was exactly one variant (as shown on Figure 10), and the final settings can use the center of the range. With a higher clock frequency several solutions may be possible – then other factors can be considered, such as trying to minimize the delays of the most timing-critical signals (DQ, DQS) to reduce dependence on possible delay vs. temperature variations (not measured yet).

Model and parameters of the input/output delay elements

Processing of the measurement results in steps 4 and 6 involved using a delay model defined by a set of parameters and then finding the values of these parameters to best fit the measurement results.
Each data byte lane is independent from the others, so for each of the 4 groups (two for output and two for input) there are nine signals – one DQS and 8 DQ signals. Each delay consists of a 32-tap delay line with a datasheet delay of 78 ps per tap and a 5-tap delay with a nominal 10 ps step. Our model represents each 32-tap delay as linear, with tDQ[7:0] delays corresponding to tap 0 and tSDQ[7:0], tSDQS as individual scales (measured in picoseconds per step). The fine delay steps turned out to be very non-uniform (in some cases even non-monotonic), so each of the 4 delay values (for the 5-tap delay) is assigned an individual parameter – 4 for DQS (tFSDQS) and 32 for DQ (tFSDQ).
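A minimal sketch of this per-signal delay model (parameter names follow the text; the numbers are illustrative, not fitted values):

```python
# Per-signal delay model: a linear 32-tap coarse stage plus an
# individually-calibrated 5-position fine stage (32 * 5 = 160 codes).

def modeled_delay(setting, t0, scale, fine_steps):
    """setting: 0..159 combined delay code (coarse*5 + fine).
    t0: delay at tap 0 (ps); scale: ps per coarse tap;
    fine_steps: 4 cumulative fine-tap delays (ps) for fine codes 1..4."""
    coarse, fine = divmod(setting, 5)
    fine_ps = fine_steps[fine - 1] if fine else 0.0
    return t0 + coarse * scale + fine_ps

# Illustrative parameters: tDQ = 160 ps, tSDQ = 75 ps/step, and a
# deliberately non-uniform set of fine-tap delays
fine = [5.0, 12.0, 14.0, 22.0]
print(modeled_delay(7, 160.0, 75.0, fine))  # coarse=1, fine=2 -> 160 + 75 + 12
```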

The procedure of measuring all 4 combinations of leading/trailing edges of the strobe and data makes it possible to calculate the duty cycle for each of the 9 signals – tDQSHL (difference between time high and time low for the DQS signal) and eight tDQHL[7:0] for the similar differences on each of the data lines. An additional parameter was used to model the uncertainty of the measurement results (number of ones or zeros of the 32 samples) as a function of the delay difference from the center (corresponding to 50% zeros and ones). This parameter (anaScale in the program code) is measured in picoseconds and means how much the delay should be changed to switch from all 0 to all 1 (using a simple piecewise linear approximation).

Parameter fitting is implemented using the Levenberg-Marquardt algorithm; initial scale values use datasheet data, initial delays are estimated using histograms of the acquired data (to separate data acquired with a different integer number of clock cycle shifts), and the other parameters are initialized to zeros. Below is a sample of the program output – the algorithm converges rather quickly, getting to a remaining root mean square error (difference between the measured and modeled data) of about 10 ps:
Before LMA (DQ lane 0): average(fx)= 40.929028ps, rms(fx)=68.575944ps
0: LMA_step SUCCESS average(fx)= -0.336785ps, rms(fx)=19.860737ps
1: LMA_step SUCCESS average(fx)= -0.588623ps, rms(fx)=11.372493ps
2: LMA_step SUCCESS average(fx)= -0.188890ps, rms(fx)=10.078727ps
3: LMA_step SUCCESS average(fx)= -0.050376ps, rms(fx)=9.963139ps
4: LMA_step SUCCESS average(fx)= -0.013543ps, rms(fx)=9.953569ps
5: LMA_step SUCCESS average(fx)= -0.003575ps, rms(fx)=9.952006ps
6: LMA_step SUCCESS average(fx)= -0.000679ps, rms(fx)=9.951826ps
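The idea of the fit can be illustrated without reproducing the full Levenberg-Marquardt implementation: for the purely linear coarse-delay part, an ordinary least-squares fit on synthetic noisy data already recovers the tap-0 delay and the ps/step scale (a sketch, not the actual program):

```python
import random

# Synthetic "measured" delays for one 32-tap line: true model plus
# ~10 ps of Gaussian noise, comparable to the residual rms above.
random.seed(0)
taps = list(range(32))
true_t0, true_scale = 160.0, 75.0
measured = [true_t0 + true_scale * t + random.gauss(0, 10) for t in taps]

# Closed-form simple linear regression: measured ~ t0 + scale * tap
n = len(taps)
mean_x = sum(taps) / n
mean_y = sum(measured) / n
scale = (sum((x - mean_x) * (y - mean_y) for x, y in zip(taps, measured))
         / sum((x - mean_x) ** 2 for x in taps))
t0 = mean_y - scale * mean_x
rms = (sum((y - (t0 + scale * x)) ** 2
           for x, y in zip(taps, measured)) / n) ** 0.5
```

The full model is non-linear in its parameters (fine-tap non-uniformity, duty cycle, anaScale), which is why the actual program uses Levenberg-Marquardt rather than a closed-form fit.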

Tables 1 and 2 summarize the parameters of the delay models for all input and data/strobe output signals. Of course these parameters do not describe the pure delay elements of the FPGA device, but a combination of these elements, the I/O ports, the PCB traces, and delays in the DDR3 memory device. The BUFIO clock buffers and routing delays also contribute to the delays of the DQS input paths.

Table 1. Input delays model parameters
parameter   number of values   average      min      max   max-min   units
tDQSHL                     2      4.67   -35.56    44.90     80.46   ps
tDQHL                     16    -74.12  -128.03    -4.96    123.07   ps
tDQ                       16    159.87   113.93   213.44     99.51   ps
tSDQS                      2     77.98    75.36    80.59      5.23   ps/step
tSDQ                      16     75.18    73.00    77.00      4.00   ps/step
tFSDQS                     8      5.78    -1.01     9.88     10.89   ps/step
tFSDQ                     64      6.73    -1.68    14.25     15.93   ps/step
anaScale                   2     17.60    17.15    18.05      0.90   ps

Table 2. Output delays model parameters
parameter   number of values   average      min      max   max-min   units
tDQSHL                     2   -114.44  -138.77   -90.10     48.66   ps
tDQHL                     16    -23.62   -96.51    44.82    141.33   ps
tDQ                       16   1236.69  1183.00  1281.92     98.92   ps
tSDQS                      2     74.89    74.86    74.92      0.06   ps/step
tSDQ                      16     75.42    69.26    77.22      7.96   ps/step
tFSDQS                     8      6.16     2.10    11.32      9.22   ps/step
tFSDQ                     64      6.94     0.19    19.81     19.63   ps/step
anaScale                   2      8.18     5.38    10.97      5.59   ps

Features I would like to see improved in the future Xilinx devices

“Finedelay” 5-tap delay stage in IDELAYE2 and ODELAYE2 elements

I noticed the existence of these 5-tap delay elements in the utilization report of the Xilinx Vivado tools – they do not seem to be documented in the Libraries Guide. I assume that the manufacturer was not very happy with their performance: the average measured value of the delay per tap turned out to be less than 7 ps, so even the last tap output does not provide a delay of half the 32-tap step, and the non-uniformity of the delays makes them difficult to use in simple hardware-based delay adjustment modules. But I like this option – it almost gives one extra bit of delay, and as we are using software for delay calibration it is not a problem to have even a non-monotonic delay stage. So I would like to see this feature improved – more taps added to completely cover the full step of the coarse delay stage in future devices – and to have this nice feature documented, not hidden from the users.

Use of the internal voltage reference and the duty cycle correction

The internal voltage reference option was used in the tested circuitry because of the limited number of pins available to implement a single-bank 16-bit wide memory interface, and the Xilinx datasheet limits the memory clock to just 400 MHz for such a configuration. Measurements show that there is a bias of -74.12 ps on the duty cycle that may be caused by variation of the internal reference voltage, but the spread of the delays (123 ps) is still larger. Of course it is difficult to judge without statistics on multiple units, but I suppose the handicap of using the internal reference is not that significant. And even 123 ps is not that big, as tDQHL was measured as the difference of duration high minus duration low, so if one transition edge is fixed, the other will have an error of just half of this value – less than a coarse (32-tap) delay step when calibrated at 200 MHz (the fine delay can be calibrated with 300 MHz).

It would be nice to have at least a couple of bits in the delay primitives dedicated to duty cycle correction of the delay elements; this could be implemented as a selective AND or OR of the delay tap output with the previous one.

by andrey at April 25, 2015 12:24 AM

April 23, 2015


Kernel development for OpenEmbedded with Eclipse

Eclipse with the C Development Tool (CDT) is a very powerful and feature-rich IDE for developing embedded Linux applications, such as the Elphel393 camera. CDT includes CODAN – a static code analysis tool that helps users track possible problems in their code without compiling it – and the Code Indexer, providing auto-complete and code navigation (F3) features. They work independently from the compiler, so parsing the code in the same manner as the compiler does is essential for producing meaningful results. As a project grows, the interconnections between its parts tend to become more and more complicated, and maintaining the congruency of code processing between the compiler and CODAN/Code Indexer becomes a non-trivial task. On the Internet, the most frequent recommendation for users who wish to develop the Linux kernel with Eclipse is to disable the CODAN feature, since messy false error markers make it practically unusable. The situation becomes even worse for developers using external build tools (such as OpenEmbedded’s BitBake), as CODAN relies on the output of a CDT-integrated build system to find the correct way of parsing the code. Still, embedded Linux applications usually involve kernel development, so we’ll try to find a practical approach to get the power of CODAN and the Code Indexer into our hands.

Preparing the source code

I assume Poky image build environment is already set up. More info can be found here.

The main sources of analysis errors are incorrect include paths, a large number of unused source files that don’t contribute to the build and break the index by redefining already defined symbols, and additional parameters that aren’t present in the code and are passed to the compiler via '-D' and '-include' flags. We can get all this data from the build output. This will require a specific BitBake recipe and a parser script (the script is written in Python).
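As a rough illustration of what such a parser extracts (the function and the example command line here are hypothetical; the real script is more involved and tied to the project structure):

```python
import shlex

# Hypothetical sketch: pull -D defines, -I include paths and -include
# files out of a logged gcc command line from the build output.

def parse_gcc_line(line):
    defines, includes, extra = [], [], []
    args = shlex.split(line)
    i = 0
    while i < len(args):
        a = args[i]
        if a.startswith('-D'):
            defines.append(a[2:])
        elif a.startswith('-I'):
            if len(a) > 2:
                includes.append(a[2:])
            else:           # path given as the next argument
                includes.append(args[i + 1])
                i += 1
        elif a == '-include':
            extra.append(args[i + 1])
            i += 1
        i += 1
    return defines, includes, extra

d, inc, ex = parse_gcc_line(
    "gcc -D__KERNEL__ -Iinclude -include include/linux/kconfig.h -c init/main.c")
print(d, inc, ex)
```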

In Elphel, we use a specially arranged project tree for kernel development — it allows us to plug developed drivers and patches to any kernel used by BitBake with a number of symlinks. Two sets of symlinks allow BitBake to “see” developed source files while compiling the kernel and Eclipse to “see” the main kernel source code. To create this project tree, navigate to poky/ and run:

git clone

Required links are described in a kernel build recipe and created by BitBake during the ‘unpack’ task. A build is needed to produce all the automatically generated header files.

. ./oe-init-build-env
bitbake linux-xlnx -c clean -f
bitbake linux-xlnx -c unpack -f
bitbake linux-xlnx -f

Setting up the Eclipse project
The created project tree already contains a prepared project settings file (.cproject). If you’re interested in Linux development for the Elphel393 camera, you can use it after a couple of easy initial steps. If you’re interested in tuning your own project, I’ll give a summary of the required settings in this blog.

Run Eclipse. Some additional heap memory may be required for indexing the kernel source:

./eclipse -vmargs -Xmx4G

  • File → New → C Project
    • Name = linux-elphel (this is hard-coded in a parser script so if you want to change it, edit the script as well)
    • Uncheck “Use default location”
    • Location = path to linux-elphel/ project directory
    • Project type = Makefile project → Empty Project
    • Toolchain = Linux GCC
    • [Next] → Advanced Settings (OK to overwrite)
  • C/C++ General → Preprocessor Include Paths → Entries → GNU C → CDT User Settings
    • [Add...] → Select “Preprocessor macros file” → linux/include/generated/autoconf.h → [OK]
    • [Add...] → Select “Preprocessor macros file” → linux/include/linux/compiler.h → [OK]
  • C/C++ General → Indexer
    • Check “Enable project specific settings”
    • Check “Enable indexer”
    • Uncheck “Index source files not included in the build”
    • Uncheck “Index unused headers”
    • Check “Index header variants”
    • Uncheck “Index source and header files opened in editor”
    • Uncheck “Allow heuristic resolution of includes”
    • Set size of files to be skipped >100MB (effectively disabling this feature)
    • Uncheck all “Skip…” options
  • C/C++ General → Paths and symbols → Includes → GNU C → [Add...] → [Workspace] → /linux-elphel/linux/include → [OK] → [Ok]
  • C/C++ General → Paths and symbols → Source Location → [Add Folder...] → select linux/ → [OK]
    • In the same window delete default source location entry (/linux-elphel)
  • C/C++ General → Paths and symbols → Symbols → GNU C → [Add...] → Name=__GNUC__, value=4 → [OK]
  • C/C++ General → Preprocessor Include Paths → Providers → Uncheck all except CDT User Setting Entries and CDT Managed Build Setting Entries
  • [OK] to close Advanced Settings window → Finish.

The project is created. Close Eclipse for now.

Running the parser
You’ll need a modified recipe file and a parser script. To make BitBake output all the information required, add a variable assignment to the recipe:


Download the parser script into poky/build/ directory:

git clone

This script is heavily dependent on the project structure and has to be customized for your project. Feel free to ask if you have any questions about it. Build the kernel with a specific set of flags (it’ll take a while) and parse the output:

export _MAKEFLAGS="-s -w -j1 -B KCFLAGS='-v'"
bitbake linux-xlnx -c clean -f
bitbake linux-xlnx -c compile -v -f|python3 ./kernel-bitbake-parser/

The output consists of 4 sections – define statements, include paths, source paths and extra include files. The first 3 of them are formatted as XML tags, allowing you to copy-and-paste them directly into the respective nodes of a .cproject file. The script will also attempt to modify the .cproject file automatically. Extra includes have to be added manually from Eclipse (C/C++ General → Preprocessor Include Paths → Entries → GNU C → CDT User Settings → [Add...] → Select “Include file” → Copy the path from the parser output → [OK]).

Run Eclipse:

./eclipse -vmargs -Xmx4G

Project → C/C++ Index → Rebuild.

The result is less than 0.005% unresolved symbols (this can be seen in the Error Log: Window → Show view → Other… → Error Log) and no error markers from CODAN.

by Yuri Nenakhov at April 23, 2015 10:47 PM

April 18, 2015

Free Electrons

Linux 4.0 released, Free Electrons #6 contributing company

Linus Torvalds released Linux 4.0 a few days ago, deciding to increment the major version number just because he cannot count up to 20 with his fingers and toes. As usual, LWN gave an excellent coverage of the merge window for 4.0 (which at the time was expected to be called 3.20): first part, second part and third part. LWN also published an article with development statistics about the 4.0 cycle.

According to the LWN statistics, Free Electrons is the 6th contributing company in number of patches for the 4.0 cycle.

Here are, in detail, all our commits to the Linux 4.0 release:

by Thomas Petazzoni at April 18, 2015 02:02 PM

April 15, 2015

Bunnie Studios

The Heirloom Laptop’s Custom Wood Composite

The following is an excerpt from a recent Novena backer update that just got published. I thought the tech bits, at least, might be interesting to a broader audience so I’m republishing them here:

With mainline laptop production finally humming along, bunnie was able to spend a week in Portland, Oregon working side by side with Kurt Mottweiler to hammer out all of the final open issues on the Heirloom devices.

We’re very excited about and proud of the way the Heirloom laptops are coming together. In a literal sense, Heirloom laptops are “grown” – important structural elements come from trees. While we could have taken the easy route and made every laptop identical, we felt it would be much more apropos of a bespoke product to make each one unique by picking the finest woods and matching their finish and color in a tasteful fashion. As a result, no two Heirloom laptops will look the same; each will be beautiful in its own unique way.

There’s a lot of science and engineering going into the Heirloom laptops. For starters, Kurt has created a unique composite material by layering cork, fiberglass, and wood. To help characterize the novel composite, some material samples were taken to the Center for Bits and Atoms at MIT, where Nadya Peek (who helped define the Peek Array) and Will Langford characterized the performance of the material. We took sections of the wood composite and performed a 3-point bend test using an Instron 4411 electromechanical material testing machine. From the test data, we were able to extract the flexural modulus and flexural strength of the material.
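The two extracted quantities follow from the standard 3-point bend relations; a small sketch with illustrative (not Heirloom) numbers:

```python
# Standard 3-point bend relations used to reduce load/extension data:
#   stress  = 3*F*L / (2*b*d^2)      F: load, L: support span
#   modulus = L^3 * m / (4*b*d^3)    m: slope of the load-deflection curve
# b: specimen width, d: specimen thickness

def flexural_strength(F_max, L, b, d):
    return 3 * F_max * L / (2 * b * d ** 2)

def flexural_modulus(slope, L, b, d):
    return L ** 3 * slope / (4 * b * d ** 3)

# Illustrative specimen: 60 mm span, 10 mm wide, 4 mm thick
L, b, d = 60e-3, 10e-3, 4e-3                # metres
sigma = flexural_strength(56.0, L, b, d)    # Pa, at 56 N peak load
E = flexural_modulus(32e3, L, b, d)         # Pa, slope 32 kN/m
print(round(sigma / 1e6, 1), round(E / 1e9, 2))  # 31.5 MPa, 2.7 GPa
```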

Heirloom composite material loaded into the testing machine

I’m not a mechanical engineer by training, so words like “modulus” and “specific strength” kind of go over my head. But Nadya was kind enough to lend me some insight into how to think about materials in this context. She pointed me at the Ashby chart, which like some xkcd comic panels, I could stare at for an hour and still not absorb all the information contained within.

For example, the Ashby chart above plots Young’s Modulus versus density for many materials. In short, the bottom left of the chart has bendy, light materials – like cork – and the top right of the chart has rigid, heavy materials, like tungsten. For a laptop case, we want a material with the density of cork, but the stiffness of plastic. If you look at the chart, wood products occupy a space to the left of plastics, meaning they are less dense, but they have a problem: they are weak perpendicular to the grain, so depending on the direction of the strain, they can be as yielding as polyethylene (the stuff used to make plastic beverage bottles), or stiffer than polycarbonate (the stuff layered with glass to make bulletproof windows). Composite materials are great because they allow us to blend the characteristics of multiple materials to hit a desired target; in this case, Kurt has blended cork, glass fiber, and wood.

The measurements of the Heirloom composite show a flexural strength of about 33 MPa, and a flexural modulus of about 2.2–3.2 GPa. The density of the material is 0.49 g/cm³, meaning it’s about half the density of ABS. Plotting these numbers on the Ashby chart shows that the Heirloom composite occupies a nice spot to the left of plastics, and provides a compromise on stiffness based on grain direction.
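A quick sanity check of the density claim, and of how the composite compares on stiffness per unit weight; the ABS figures below are typical handbook values I’ve assumed, not numbers from the article.

```python
# Compare the measured Heirloom composite against assumed typical ABS
# handbook values (E ~ 2.3 GPa, density ~ 1.05 g/cm^3).

heirloom = {"E_gpa": (2.2 + 3.2) / 2, "rho": 0.49}   # measured midpoint
abs_plastic = {"E_gpa": 2.3, "rho": 1.05}            # assumed handbook ABS

# Density ratio: ~0.47, i.e. "about half the density of ABS".
ratio = heirloom["rho"] / abs_plastic["rho"]

# Specific stiffness E/rho: higher is stiffer per unit weight.
spec_heirloom = heirloom["E_gpa"] / heirloom["rho"]   # ~5.5
spec_abs = abs_plastic["E_gpa"] / abs_plastic["rho"]  # ~2.2
```

Under these assumptions the composite comes out well ahead of ABS on stiffness-to-weight, which matches where the red circle lands on the Ashby chart.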

The red circle shows approximately where the Heirloom composite lands. To be fair, measurements still revealed some directional sensitivity to the composite; depending on the grain, the modulus varies from about 2.2 GPa to 3.2 GPa (and the diameter of the red circle encompasses this variability), but this is a much tighter band than the 10x difference in modulus indicated for pure woods.

Another thing to note is that during testing, the material didn’t fail catastrophically. Above are the graphs of load vs. extension as plotted by the Instron testing machine. Even after bending the material past its peak load, it was still mostly intact and providing resistance. This result is a bit surprising; we had expected that the material, like normal wood, would break in two once it failed. Furthermore, after we reset the test, the material bounced back to its original shape; even after bending by over 10mm, once the load was removed you could barely tell it went through testing. This high fracture toughness and resilience are desirable properties for a laptop case.

Of course, there’s nothing quite like picking up the material, feeling its surprising lightness, and then trying to give it a good bend and being surprised by its rigidity and ruggedness. The Heirloom backers will get the privilege of feeling this firsthand; the rest of us will have to settle for seeing circles on Ashby charts and graphs on computer screens.

If you want to see more photos of the Heirloom laptop coming together, check out the image gallery at the bottom of the official Crowd Supply update!

by bunnie at April 15, 2015 02:26 AM

April 07, 2015

Free Electrons

Embedded Linux Conference slides from Free Electrons

Audience at ELC 2015

The Free Electrons engineering team is back from a busy week at the Embedded Linux Conference 2015 in San Jose, California. During the conference, we presented several talks and a BoF, and participated in the technical showcase with a Buildroot-related demo:

  • Maxime Ripard gave a presentation about the DMAengine subsystem, and his slides are available as PDF.
  • Thomas Petazzoni gave a talk about The Device Tree as a stable ABI: a fairy tale?, and the slides are available as PDF.
  • Boris Brezillon gave a talk about MLC/TLC NAND support: (new?) challenges for the MTD/NAND subsystem, and the slides are available as PDF.

Our three talks were all given in front of fully packed rooms, with a number of people standing for some of them! We were glad to see that the topics we proposed interested the ELC audience.

Boris Brezillon about support for MLC NAND in MTD

Thomas Petazzoni about Device Tree bindings as a stable ABI. Photo by Drew Fustini.

In addition to the talks, Thomas Petazzoni organized a BoF (Birds of a Feather) session on Buildroot on Tuesday last week, during which approximately 15 people showed up even though it wasn’t announced in the official schedule. This session was useful to get some feedback from Buildroot users, and to meet users and developers in person.

Finally, on Tuesday evening, during the technical showcase, we demoed Buildroot’s capabilities using a setup that consisted of two platforms running Buildroot-generated systems: a Raspberry Pi 2 running the Kodi media player software, and a Marvell Armada XP based OpenBlocks AX3 running as a NAS providing content for the media player. This demo was prepared by Buildroot contributor Yann E. Morin and Free Electrons engineer Thomas Petazzoni. The poster presented is available as PDF or SVG, and all the instructions to rebuild the two systems are documented online.

Buildroot demonstration at the ELC 2015 technical showcase

In addition, it is worth mentioning that all the slides from the Embedded Linux Conference are available online. The talks were video recorded by the Linux Foundation, and hopefully, unlike what happened with the ELCE 2014 videos, the ELC 2015 videos will really appear online at some point in the future.

The location of the next Embedded Linux Conference was also announced: it will take place in San Diego next year. This is the first time the US edition of the Embedded Linux Conference moves outside of Silicon Valley!

by Thomas Petazzoni at April 07, 2015 02:50 AM

April 04, 2015


Trying out KiCAD




Teardrops in KiCAD

We, at Elphel, are currently using proprietary software for schematic and PCB development, and thus are not able to provide our customers with the “real” source files of our designs – only PDF and Gerber files. Being a free software and open hardware oriented company, we would like to replace this software with open source analogues, but have not been able to do so due to various limitations and inconveniences in the design workflow. We follow the progress of projects such as gEDA and KiCAD, and recently made another attempt to use one of them in our work. KiCAD seems to be the most promising design suite, considering the recent CERN contribution and active community support. I tried to design a simple element, a flexible printed circuit cable, using KiCAD, and found out that the PCB design program lacks a useful feature: teardrops.

What are teardrops

Teardrops are often used to create mechanically stronger connections between tracks and pads/vias, to prevent drill breakout during board manufacturing. This is particularly valuable when the design objects are small, as they were in my case. The figures below illustrate the problem:


Fig. 1


Fig. 2


Fig. 3


Fig. 4

Fig. 1 shows a perfectly aligned drill hole, but the final result (as in Fig. 2) can be far from perfect because of drill tool wandering or board stack misalignment during manufacturing. Relaxing the specification, or allowing drill breakout along the hole perimeter as in Fig. 4, is not always possible. Adding teardrops (Fig. 3) in such cases is a good option.
The images below show misaligned drill holes on manufactured PCBs:

electronic circuit board

Adding new feature

The great advantage of any open source project is the possibility to add any required feature, or to fix bugs, on your own. I cloned the KiCAD repository and dove into the source code, trying to add a mock-up implementation of teardrops. It took some time to get acquainted with the class hierarchies and internal structures. Finally, I added a new option to the “Tools” menu which adds teardrops to the currently selected track. Two types of teardrops are implemented at the moment: curved (github link) and straight (github link). The selection process and results are shown in the screenshots:



The straight teardrops are composed of two segments connecting tracks and vias. The curved teardrops are actually approximated with several short segments, as KiCAD does not allow placing arcs on copper layers. There are several intentional limitations in the current implementation:

  • teardrops are created for vias only
  • DRC rules are not taken into consideration during calculations
  • the ends of the selected track must coincide with the via center
  • no user adjustable settings

These limitations stem from the experimental nature of my source code, and at the same time they define the directions for further development. Still, the result obtained is good enough to be used in real applications.
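To illustrate the geometry involved, here is a small sketch of the straight-teardrop construction: two extra segments running from the track edges to tangent points on the via barrel. This is only an illustration of the idea, not the actual KiCAD patch; the coordinate convention (via at the origin, track along +x) is an assumption of the sketch.

```python
import math

# Sketch of a "straight" teardrop: for a track of width track_w ending
# at a via of radius via_r (via centred at the origin, track along +x),
# compute the two extra segments. Each segment runs from a point on a
# track edge to the tangent point on the via circle, so it meets the
# via flush. Not the actual KiCAD implementation -- geometry only.

def straight_teardrop(via_r, track_w, length):
    """Return two segments ((qx, qy), (px, py)), one per track edge."""
    segments = []
    for sign in (+1, -1):
        qx, qy = length, sign * track_w / 2   # anchor point on track edge
        d = math.hypot(qx, qy)                # distance to via centre
        phi = math.atan2(qy, qx)              # direction of the anchor
        beta = math.acos(via_r / d)           # offset angle to tangent point
        px = via_r * math.cos(phi + sign * beta)
        py = via_r * math.sin(phi + sign * beta)
        segments.append(((qx, qy), (px, py)))
    return segments
```

Because each segment is tangent to the via circle, the copper flank blends smoothly into the pad; a curved teardrop would replace each straight segment with a chain of short segments approximating an arc, which is exactly the workaround needed since KiCAD has no arcs on copper layers.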


by Mikhail Karpenko at April 04, 2015 02:17 AM

March 30, 2015

Free Electrons

Linux 3.19 released, overview of Free Electrons contributions

It’s been a while since Linus Torvalds released Linux 3.19, and we already know that the next version of Linux will be called 4.0. It’s not too late, though, to learn more about the 3.19 release by reading the following three LWN articles: part 1, part 2 and part 3. KernelNewbies has also updated its page about 3.19.

In terms of statistics for the 3.19 release cycle, LWN has published an article which ranks Free Electrons as the 13th contributing company, with 205 patches merged. We have been among the top 30 contributing companies by number of patches for every kernel release since Linux 3.8, a sign of our continuous involvement in the upstream kernel community.

Our most important contributions in this kernel release are:

  • For the Atmel ARM processors, numerous cleanup patches from Alexandre Belloni to prepare the platform for ARM multiplatform compliance (the possibility of building the support for Atmel ARM processors together with the support of other ARM processors in a single kernel image). From Boris Brezillon, addition of Device Tree support in the AT91 RTC driver, improvements to the AT91 irqchip driver, addition of a PWM driver for the PWM built into the Atmel HLCDC display controller, addition of Device Tree support for the AT91 hardware random number generator driver, addition of an MFD driver for the Atmel HLCDC display controller, and many other Device Tree fixes and improvements.
  • For the Marvell Berlin ARM processors, addition of USB, SATA and reset controller support. The USB support required numerous core improvements to the USB subsystem, and the addition of a specific USB PHY driver.
  • For the Marvell EBU ARM processors, Gregory Clement added USB PHY support for Armada 375, and CPU hotplug support for Armada 38x as well as several other fixes and improvements. Thomas Petazzoni added suspend to RAM support for Armada XP, fixed a serious problem in the I2C driver that required some major refactoring, and did some HW I/O coherency related fixes.
  • For the Allwinner ARM processors, Maxime Ripard did the relicensing of many Device Tree files from GPL only to GPL+X11 licenses. He also added pinctrl support on Allwinner A80.
  • After writing a dmaengine driver which was merged in 3.17, Maxime Ripard started to get involved in the dmaengine subsystem itself. He contributed documentation for this subsystem, which was merged in Linux 3.19, as well as several fixes for dmaengine drivers.
  • Addition of a generic linux/media-bus-format.h header file, containing definitions of the various possible pixel formats. This header file was until then specific to the Video4Linux subsystem, but will start being used by the DRM/KMS subsystem. This addition was done in preparation of the introduction of a DRM/KMS driver for the AT91 HLCDC display controller (to come in Linux 4.0).
  • A few small improvements to the core DRM/KMS subsystem, also preparation work for the AT91 HLCDC display controller driver.
  • Fixes for the i.MX28 NAND flash controller driver (gpmi-nand) to properly support raw access operations, which allow using the userspace MTD testing utilities to validate the MTD setup. This was part of a customer project we did to assess the quality of the MTD and UBI setup on a custom Freescale i.MX28 platform.

The details of our contributions are:

by Thomas Petazzoni at March 30, 2015 03:25 PM

March 27, 2015


Nibble running on a Pi 2 with 2.2″ TFT

The speed bump of the Pi 2 means that it will run at speeds similar to the Banana Pi. I got hold of a 2.2″ TFT, soldered it onto the Pi, and made a quick video. The only additional hardware used is a small USB microphone.

by john at March 27, 2015 07:01 PM

March 25, 2015

LZX Industries

Triple Video Multimode Filter 2Q 2015 Restock Orders

Triple Video Multimode Filter, domestic postage paid, $450
Triple Video Multimode Filter, international postage paid, $465

Everyone responded well to the Video Waveform Generator run we are manufacturing right now, so as we continue to use up our internal stocks of thru-hole parts, I want to get another run going for the Triple Video Multimode Filter. These will be hand-built locally by my tech and myself, while Darkplace Manufacturing handles new module production and, currently, the Color TBC restock.

Video Waveform Generator production is going smoothly and will be finished on schedule; we’ll then move on to this module in late April.

Units start shipping May 15th. We are running behind on shipments at this point, and new order lead time is 6-8 weeks.

by Lars Larsen at March 25, 2015 09:21 PM

Video Circuits

F & S Themerson

Here is a great early visual music film by Franciszka and Stefan Themerson from 1944/45, The Eye and the Ear.

by Chris ( at March 25, 2015 10:57 AM

March 24, 2015

Video Circuits

Psyché Tropes & Video Circuits present an Evening of Modular Synthesis April the 4th

So I got together with Psyché Tropes to put on an event at Apiary Studios on the 4th of April; it should be interesting and mostly modular! There will be me and two of my favourite video synthesists doing video, and three great musicians doing audio, hopefully with some cross-patching for added fun and audio reactivity.

Psyché Tropes and Video Circuits present an Evening of Modular Synthesis at Apiary Studios on Saturday 4th April 2015.
£6 adv / £8 door. No guest list. 8pm – Late.

Psyché Tropes is a label from the creator of Hackney Film Festival dedicated to exploring the synaesthetic intersections between sound and its visual counterpart. The label’s second live edition sees a collaborative event with Video Circuits combining the realms of audio and video modular synthesis with a very special line-up. Video Circuits is a blog and research project that takes a wide ranging approach to documenting the early years of electronic video art, visual music and computer art with a view to informing the current output of contemporary artists and musicians working in the field.

Live modular shows by


Live video modular by


with Psyché Tropes djs

British artist Robin Rimbaud traverses the experimental terrain between sound, space, image and form, creating absorbing, multi-layered sound pieces that twist technology in unconventional ways. From his early controversial work using found mobile phone conversations, through to his focus on trawling the hidden noise of the modern metropolis as the symbol of the place where hidden meanings and missed contacts emerge, his restless explorations have won him international admiration from, amongst others, Björk, Aphex Twin and Stockhausen.

Audio Dependent is the electronic music alias of audio visual artist, Tim Cowie. A long-time collaborator at The Light Surgeons, Cowie’s music and sound design work crosses over from electronica and techno, to contemporary classical and ambient soundscapes. He also releases music under his own name and as part of the duo Infinite Particles.

The Asterism is a semi-improvised solo modular synth and electronics project from Mark O Pilkington (Raagnagrok / Urthona / Teleplasmiste), constructed around organic rhythmic developments and blurring the lines between natural and unnatural sounds.

Alexander Peverett (b.1976) is a multi-disciplinary artist from Wigan, England. He resides in Japan and the United Kingdom. His personal work explores the fields of electronic & computer music, video art, multi-media installation, generative art and computer graphics. He works as a freelance Art Director, Computer Artist, Sound Designer and Video Director.

Synthpunk is a musician, engineer and video synthesist. His wide-ranging skills and interests manifest themselves in many different ways, one of which is his self-built modular video synthesizer.

Chris King is an audio visual artist who primarily works with drawing, experimental animation, electronics and sound.

More info at:

by Chris ( at March 24, 2015 07:14 AM

Photos from Sabrina Ratté & Roger Tellier-Craig

I got to see Sabrina and Roger's amazing performance at the BFI on the 13th. Here are some awful photos of Sabrina's beautiful imagery, and a video Rosa took of their fantastic performance at Sonic Acts 2015 so you can also hear the incredible sounds, even if it is a different set. I believe Sabrina is also giving a talk soon too; info here

by Chris ( at March 24, 2015 07:09 AM

Ewa Justka

I found some photos from the semi-regular EAVI night in November where Ewa played, which reminded me to post up some of her work and a link to her great site. She often performs with hacked CRT televisions and optoelectronic noise circuits, so there is a great synchronicity of sound and image.

by Chris ( at March 24, 2015 06:33 AM

March 20, 2015


Samsung SuperAMOLED : weekend die-shot

Updated March 20, 2015: Thanks to lucky(?) accident and new lens we managed to take much better photos of Samsung SuperAMOLED display:

October 17, 2013: Samsung's SuperAMOLED display from Galaxy S4 mini is supposed to have active matrix (i.e. control transistors are on substrate) and integrated touch sensor. Let's take a look: It seems there are at least 2 levels of barely visible interconnect (ITO?).

With few pixels glowing:

Only pixels glowing:

Half-pitch and thinnest lines are 2.5 µm. Diagonal die size is 109 mm :-)

March 20, 2015 02:32 PM

March 17, 2015

Bunnie Studios

Name that Ware, March 2015

The Ware for March 2015 is shown below.

Thanks to Dale Grover for sharing this ware! I had read about this one as a lad, but never laid hands on one…

by bunnie at March 17, 2015 04:40 PM

Winner, Name that Ware February 2015

The Ware for February 2015 is a logic board from an HP 16600 series logic analyzer. Megabytephreak is the winner; thanks for the clear analysis, and also for helping answer other readers’ questions about the metal fill for etch concentration normalization!

by bunnie at March 17, 2015 04:40 PM

March 13, 2015


Aaaaand we’re back

back in stock

I’m happy to announce that the osPID is now officially back in stock at Rocket Scream. There have been some improvements to the hardware (micro-USB instead of mini, louder buzzer, etc.), but mostly this was about getting a more reliable supply chain in place.

Thanks to everyone for being so patient during this dry spell. The changes we’ve made should ensure that moving forward, when you want an osPID you won’t have to wait.

by Brett at March 13, 2015 01:31 PM

March 09, 2015

Video Circuits

Le Révélateur: Sabrina Ratté & Roger Tellier-Craig London

So I am pretty excited to see Le Révélateur perform as part of Digital Québec. I am a great appreciator of all their work so far; both Sabrina and Roger make some of the most beautiful work around. Link!


BFI Southbank’s regular Sonic Cinema strand teams up with ELEKTRA and MUTEK, two vibrant organisations from Québec, to present a bold and ambitious series of live audiovisual performances, featuring 8 UK Premieres over two consecutive nights.

Dubbed Digital Québec, some of the province’s most innovative and experimental A/V creators will be presenting their work for the very first time on British soil. Acting as a continuation of the combined ELEKTRA and MUTEK 15th anniversary event EM15 presented in Montréal in May 2014, the selection of works represent the interdisciplinary intersection of music, sound and digital art where both organizations meet.


Thursday March 12
Dominique T Skoltz: y2o
Yan Breuleux: Tempêtes

Matthew Biederman & 4X: Physical
Herman Kolgen: Seismik + Aftershock


Friday March 13
Maotik & Metametric: Omnis
Woulg: Ring Buffer

Myriam Bleau: Soft Revolvers
Roger Tellier-Craig & Sabrina Ratté: Le Révélateur


Tickets available here:

1 show £16
2 shows the same night £25
Pass for 4 shows £40

If you wish to buy a 4-performance or 8-performance pass, you need to call the BFI box office or visit the venue to buy the festival passes.

With the support of Conseil des arts et des lettres du Québec, ministère de la Culture et des Communications du Québec and the Québec Government Office in London.

by Chris ( at March 09, 2015 12:44 PM

Dean Winkler

Here is a new work from Dean Winkler who has provided me with some of my favourite posts with uploads of his past work.

"An abstract meditation on global warming set to music by Low City. 1980s style analog video layering, created with modern desktop tools."
Check out his Vimeo channel for more fantastic work.

by Chris ( at March 09, 2015 06:16 AM

March 03, 2015

Richard Hughes, ColorHug

Updating Firmware on Linux

A few weeks ago Christian asked me to help with the firmware update task that a couple of people at Red Hat have been working on for the last few months. Peter has got fwupdate to the point where we can “upload” sample .cap files onto the flash chips, but this isn’t particularly safe, or easy to do. What we want for Fedora and RHEL is to be able to either install a .rpm file for a BIOS update (if the firmware is re-distributable), or to get notified about it in GNOME Software where it can be downloaded from the upstream vendor. If we’re showing it in a UI, we also want some well written update descriptions, telling the user about what’s fixed in the firmware update and why they should update. Above all else, we want to be able to update firmware safely offline without causing any damage to the system.

So, let’s back up a bit. What do we actually need? A binary firmware blob isn’t so useful on its own, so Microsoft have decided we should all package it up in a .cab file (a bit like a .zip file) along with a .inf file that describes the update in more detail. Parsing .inf files isn’t so hard on Linux, as we can fix them up to be valid and open them as a standard key file. The .inf file gives us the hardware ID that the firmware refers to, as well as a vendor and a short (!) update description. So far the update descriptions have been less than awesome – “update firmware” – so we also need some way of fixing up the update descriptions to be suitable to show the user.
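In Python terms, the “open them as a standard key file” step might look roughly like this. The section and key names in the sample are illustrative, not taken from a real vendor .inf, and real files need more fixing up first (encodings, duplicate keys, and so on):

```python
import configparser

# Illustrative, minimal .inf-like content. These keys are made up for
# the example; real vendor .inf files are messier and need cleanup
# before they parse as a key file.
SAMPLE_INF = """\
[Version]
Class=Firmware
Provider="ExampleVendor"
DriverVer=03/03/2015,1.2.3

[Strings]
FirmwareDesc="Fixes sensor calibration drift"
"""

def parse_inf(text):
    # strict=False tolerates duplicate keys; '=' is the only delimiter,
    # and optionxform=str preserves the original key case.
    cp = configparser.ConfigParser(strict=False, delimiters=("=",))
    cp.optionxform = str
    cp.read_string(text)
    return cp

inf = parse_inf(SAMPLE_INF)
vendor = inf["Version"]["Provider"].strip('"')        # "ExampleVendor"
desc = inf["Strings"]["FirmwareDesc"].strip('"')      # the update blurb
```

This also shows why the vendor-supplied descriptions are so terse: a one-line quoted string in a key file is all the format naturally carries, which is where the richer MetaInfo file comes in.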

AppStream, again, to the rescue. I’m going to ask nice upstreams like Intel and the weird guy who does ColorHug to start shipping a MetaInfo file alongside the .inf file in the firmware .cab file. This means we can have fully localized update descriptions, along with all the usual things you’d expect from an update, e.g. the upstream vendor, the licensing information, etc. Of course, a lot of vendors are not going to care about good descriptions, and won’t be interested in shipping another 16k file in the update just for Linux users. For that, we can actually “inject” a replacement MetaInfo file when we curate the AppStream metadata. This allows us to download all the .cab files we care about, but are not allowed to redistribute, run appstream-builder on them, then package up just the XML metadata, which can be consumed by pretty much any distribution. Ideally vendors would do this long term, but you need git master versions of basically everything to generate the file, so it’s somewhat of a big ask at the moment.

So, we’ve now got a big blob of metadata we can read in GNOME Software, and show to Fedora users. We can show it in the updates panel, just like a normal update, we just can’t do anything with it. We also don’t know if the firmware update we know about is valid for the hardware we’re running on. These are both solved by the new fwupd project that I’ve been hacking on for a few days. This exit-on-idle daemon allows normal users to apply firmware to devices (with appropriate PolicyKit checks, typically the root password) in a safe way. We check the .cab file is valid, is for the right hardware, and then apply the update to be flashed on next reboot.

A lot of people don’t have UEFI hardware that’s capable of using capsule firmware updates, so I’ve also added a ColorHug provider, which predictably also lets you update the firmware on your ColorHug device. It’s a lot lower risk testing all this super-new code with a £20 EEPROM device than your nice shiny expensive prototype hardware from Intel.

At the moment there’s not a lot to test; we still need to connect up the low level fwupdate code with the fwupd provider, but that will be a lot easier when we get all the prerequisites into Fedora. What’s left to do now is to write a plugin for GNOME Software so it can communicate with fwupd, and to write the required hooks so we can get the firmware upgrade status as a notification for boot+2. I’m also happy to accept patches for other hardware that supports updates, although the internal API isn’t 100% stable yet. This is probably quite interesting for phones and tablets, so I’d be really happy if this gets used in other non-Fedora, or non-desktop, use cases.

Comments welcome. No screenshots yet, but coming soon.

by hughsie at March 03, 2015 07:29 PM

March 02, 2015

Video Circuits

C.E. Burnett

A recent article on Paleofuture brought to my attention the work of C.E. Burnett, an engineer at RCA's research lab during the 1930s. This must be a very early example of cathode ray tubes being used for pattern generation, and it certainly predates many better-known media art pioneers. I wonder if he ever spoke to Ben F. Laposky, another early pioneer in a related technique.


by Chris ( at March 02, 2015 10:42 AM

Zoran Radovic

Zoran Radovic has been working with pendulums, plotters, lasers and CRT displays since the 1960s. A recent post by an online friend highlighted the fact that his studio is being sold off; I noticed some plotter art on the wall, which led me to check out his great site detailing his fantastic work. He describes his various systems, building on his ideas and demonstrating how some of them work. Please go read his site for more info!

Here are a few more links with some of his work

by Chris ( at March 02, 2015 08:16 AM

February 28, 2015

Bunnie Studios

Name that Ware, February 2015

The Ware for February 2015 is shown below.

Eep! I’m late! I blame Chinese New Year.

This one was a tough one to crop: too much makes it too obvious, too little makes it impossible to guess. However, I’m betting that someone out there could probably recognize this ware even if I downsampled all of the part numbers and manufacturer’s logos.

Thanks again to dmo for sharing this ware. I’ll miss visiting your lab!

by bunnie at February 28, 2015 06:02 PM

Winner, Name that Ware January 2015

Judging this one was tough. There were a lot of perfectly good guesses (and some pretty hilarious ones :), but because the advertised purpose of this ware is so weird, sound engineering reasoning need not apply.

What I’m told is that you install this on an electric bike to prevent the motor from burning out. I… don’t really think that’s effective, nor do I really believe it. At the very least, stacking capacitors like this while connecting them with thin copper traces to a terminal block, and then wiring them with a long pair of wires to a battery, seems to nullify any benefit of equalizing the ESR of capacitors by using a banked array of different values.

Although I think Jeff’s explanation (use as a power filtering cap in car audio) is a much more likely reason…I liked ingo’s thought process in reviewing the ware — knowledgeable, yet skeptical. So I’ll declare ingo as the winner…congrats, email me for your prize!

by bunnie at February 28, 2015 06:01 PM

February 26, 2015

Video Circuits

New Australian Experimental Video Resource

So a while back I heard rumblings about this and have been meaning to post it for a while. Stephen Jones (one of the pioneers of media art in Australia) designed the Supernova 12, a system owned by Jeffrey Siedler, who has very kindly donated it to a new experimental video lab set up by Tom Ellard (of Severed Heads) and Ant Banister at the University of New South Wales. Other Stephen Jones machines have also been resurrected and integrated with some newer devices, and at some point in the future hopefully we will see some new work! I believe Ed and Liz of LZX and Pia VanGelder are also lending a hand. (Thanks to Ant for the photos, and to Tom for his blog posts.)

by Chris ( at February 26, 2015 02:20 PM

February 23, 2015


Nordic NRF24L01+ - real vs fake : weekend die-shot

Nordic NRF24L01+ (NRF24L01P) is a very popular 2.4 GHz transceiver used in countless consumer products. It's not surprising that we've come across a fake of it.

Genuine: This chip was extracted from an "expensive" (~$10) RF module with an additional RF amplifier chip:

Die size - 1876x1761 µm, 250nm technology.

Nordic logo on the die:

Compatible/counterfeit: This chip was extracted from a "cheap" ($1) RF module:

Die size - 2014x1966 µm, 350nm technology.


Chip marking is similar, though not identical (genuine one on the right):

The fake chip has quite thin marking. Also, the text position jumps significantly from chip to chip (while the dot stays in place).

Although no functional differences have been reported (yet), one could expect the 350 nm compatible chip to have slightly higher power consumption and slightly lower sensitivity. If this chip were properly marked as a compatible part (like the SI24R1, one of the compatible chips), it would be a totally legitimate business. But as it stands, designers, manufacturers and end users are misled.

February 23, 2015 04:10 AM

February 13, 2015

Richard Hughes, ColorHug

16F1454 RA4 input only

To save someone else a wasted evening: RA4 on the Microchip PIC 16F1454 is an input-only pin, not I/O as stated in the datasheet. In other news, I’ve prototyped the ColorHug ALS on a breadboard (which, it turns out, was a good idea!) and the PCB is now even smaller. 12x19mm is about as small as I can go…

by hughsie at February 13, 2015 09:44 AM

February 08, 2015

Bunnie Studios

A Tale of Two Zippers

Recently, Akiba took me to visit his friend’s zipper factory. I love visiting factories: no matter how simple the product, I learn something new.

This factory is a highly-automated, vertically-integrated manufacturer. To give you an idea of what that means, they take this:

Ingots of 93% zinc, 7% aluminum alloy; approx 1 ton shown

and this:

Compressed sawdust pellets, used to fuel the ingot smelter

and this:

Rice, used to feed the workers

And turn it into this:

Finished puller+slider assemblies

In between the input material and the output product is a fully automated die casting line, a set of tumblers and vibrating pots to release and polish the zippers, and a set of machines to de-burr and join the puller to the slider. I think I counted less than a dozen employees in the facility, and I’m guessing their capacity well exceeds a million zippers a month.

I find vibrapots mesmerizing. I actually don’t know if that’s what they are called — I just call them that (I figure within minutes of this going up, a comment will appear informing me of their proper name). The video below shows these miracles at work. It looks as if the sliders and pullers are lining themselves up in the right orientation by magic, falling into a rail, and being pressed together into that familiar zipper form, in a single fully automated machine.

720p version

If you put your hand in the pot, you’ll find there’s no stirrer to cause the motion that you see; you’ll just feel a strong vibration. If you relax your hand, you’ll find it starting to move along with all the other items in the pot. The entire pot is vibrating in a biased fashion, such that the items inside tend to move in a circular motion. This pushes them onto a set of rails which are shaped to take advantage of asymmetries in the object to allow only the objects that happen to jump on the rail in the correct orientation through to the next stage.

Despite the high level of automation in this factory, many of the workers I saw were performing this one operation:

720p version

This raises the question: why do some zippers get a fully automated assembly process, while others are only semi-automated?

The answer, it turns out, is very subtle, and it boils down to this:

I’ve added red arrows to highlight the key difference between the zippers. This tiny tab, barely visible, is the difference between full automation and a human having to join millions of sliders and pullers together. To understand why, let’s review one critical step in the vibrapot operation.

We paused the vibrapot responsible for sorting the pullers into the correct orientation for the fully automatic process, so I could take a photo of the key step:

As you can see, when the pullers come around the rail, their orientation is random: some are facing right, some facing left. But the joining operation must only insert the slider into the smaller of the two holes. The tiny tab, highlighted above, allows gravity to cause all the pullers to hang in the same direction as they fall into a rail toward the left.

The semi-automated zipper design doesn’t have this tab; as a result, the design is too symmetric for a vibrapot to align the puller. I asked the factory owner if adding the tiny tab would save this labor, and he said absolutely.

At this point, it seems blindingly obvious to me that all zippers should have this tiny tab, but the zipper’s designer wouldn’t have it. Even though the tab is very small, a user can feel the subtle bumps, and it’s perceived as a defect in the design. As a result, the designer insists upon a perfectly smooth tab which accordingly has no feature to easily and reliably allow for automatic orientation.

I’d like to imagine that most people, after watching a person join pullers to sliders for a couple of minutes, would be quite alright with suffering the tiny bump on the tip of their zipper to save another human the fate of manually aligning pullers into sliders for 8 hours a day. I suppose alternately, an engineer could spend countless hours trying to design a more complex method for aligning the pullers and sliders, but (a) the zipper’s customer probably wouldn’t pay for that effort and (b) it’s probably net cheaper to pay unskilled labor to manually perform the sorting. They’ve already automated everything else in this factory, so I figure they’ve thought long and hard about this problem, too. My guess is that robots are expensive to build and maintain; people are self-replicating and largely self-maintaining. Remember that third input to the factory, “rice”? Any robot’s spare parts have to be cheaper than rice to earn a place on this factory’s floor.

However, in reality, it’s far too much effort to explain this to end customers; in fact, quite the opposite happens in the market. Because of the extra labor involved in putting these together, the zippers cost more; therefore they tend to end up in high-end products. This further reinforces the notion that really smooth zippers with no tiny tab on them must be the result of quality control and attention to detail.

My world is full of small frustrations similar to this. For example, most customers perceive plastics with a mirror-finish to be of a higher quality than those with a satin finish. While functionally there is no difference in the plastic’s structural performance, it takes a lot more effort to make something with a mirror-finish. The injection molding tools must be painstakingly and meticulously polished, and at every step in the factory, workers must wear white gloves; mountains of plastic are scrapped for hairline defects, and extra films of plastic are placed over mirror surfaces to protect them during shipping.

For all that effort, for all that waste, what’s the first thing a user does? Put their dirty fingerprints all over the mirror finish. Within a minute of coming out of the box, all that effort is undone. Or worse yet, they leave the protective film on, resulting in a net worse cosmetic effect than a satin finish. Contrast this to a satin finish. Satin finishes don’t require protective films, are easier to handle, last longer, and have much better yields. In the user’s hands, they hide small scratches, fingerprints, and bits of dust. Arguably, the satin finish offers a better long-term customer experience than the mirror finish.

But that mirror finish sure does look pretty in photographs and showroom displays!

by bunnie at February 08, 2015 08:43 PM

Altus Metrum

keithp's rocket blog: AltOS 1.6

AltOS 1.6 — TeleDongle v3.0 support and bug fixes

Bdale and I are pleased to announce the release of AltOS version 1.6.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STM32F042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a major release of AltOS, including support for our new TeleDongle v3.0 board and a selection of bug fixes.

AltOS Firmware — TeleDongle v3.0 added along with some fixes

Our updated ground station, TeleDongle v3.0, works just like the original TeleDongle, but is an all-new design:

  • CC1200 radio chip is about 5dB more sensitive than TeleDongle's CC1111.

  • LPC11U14 CPU can be reprogrammed over the USB link.

AltOS Bug Fixes

We also fixed a few bugs in the firmware:

  • Make sure the startup flight computer beeps are consistent. Sometimes, it was taking long enough to beep out the battery voltage that the flight computer state was changing in the middle, causing a bit of confusion.

  • Change TeleDongle's LED indicators during telemetry reception. The green LED blinks on successful packet reception, and the red LED blinks when a packet with an invalid checksum is received.

  • The SPI driver used in both TeleDongle v3 and TeleGPS has been rewritten to avoid locking up under heavy CPU load. If you've got a TeleGPS board, you'll want to reflash with new firmware.

AltosUI and TeleGPS applications

A few minor new features are in this release

  • AltosUI can now compute and display tilt angle when graphing eeprom log data from TeleMega and EasyMega.

  • The AltosUI tool window is shown when starting with a data file. This way, when you double-click on a file in the file manager, you'll get the whole AltosUI interface, rather than just the graphing window.

  • At the end of replaying an old log file, stick 'done' in the Age field so you can tell the recording is over.

Bug Fixes

There are a bunch of minor bug fixes, including the usual collection of attempts to make stuff behave better on Windows platforms.

  • Use a different Windows API to discover USB device ids. This works better on my new HP Windows 7 machine. Maybe it will work better for other people too?

  • Look in more places in the Windows registry to try and find the installed Java version. It appears that the default Java download from Oracle is a 32-bit version? In any case, that version sticks its install information in a different spot in the registry.

  • Fix file associations on Windows when Java isn't installed in the system root.

  • Make 'Scan Channels' work better with new AltOS firmware which only reports device configuration information once every five seconds.

February 08, 2015 07:57 AM

February 05, 2015

Richard Hughes, ColorHug

Ambient Light Sensors

An ambient light sensor is a little light-to-frequency chip that you’ve certainly got in your tablet, most probably in your phone and you might even have one in your laptop if you’re lucky. Ambient light sensors let us change the panel brightness in small ways so that you can still see your screen when it’s sunny outside, but we can dim it down when the ambient room light is lower to save power. Lots of power.
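As a sketch of the kind of brightness policy an ALS enables: human brightness perception is roughly logarithmic, so a log curve between a dim room and bright daylight is a common starting point. The lux breakpoints below are illustrative, not values from any real driver:

```python
import math

def backlight_percent(lux, min_pct=5, max_pct=100):
    """Map an ambient-light reading (in lux) to a backlight level.

    Interpolates logarithmically between an assumed dim-room reading
    (~10 lux) and bright daylight (~10000 lux); all breakpoints here
    are made-up defaults, not values from any real ALS driver.
    """
    lux = max(lux, 1.0)            # avoid log(0) on a pitch-dark reading
    dark, bright = 10.0, 10000.0   # assumed ambient range, in lux
    span = math.log10(bright) - math.log10(dark)
    t = (math.log10(lux) - math.log10(dark)) / span
    t = min(max(t, 0.0), 1.0)      # clamp to [0, 1]
    return round(min_pct + t * (max_pct - min_pct))
```

In a real desktop, the raw readings would also want smoothing and hysteresis so that the panel does not flicker when a hand passes over the sensor.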

There is a chicken-and-egg problem here. Not many laptops have ambient light sensors; some do, but driver support is spotty and they might not work, or might work but return values on some unknown absolute scale. As hardware support is so bad, we’ve not got any software that actually uses the ALS hardware effectively, and so most ALS hardware goes unused. Most people don’t actually have any kind of ALS at all, even on high-end models like ThinkPads.

So, what do we do? I spent a bit of time over the last few weeks designing a small OpenHardware USB device that acts as an ALS sensor. It’s basically a ColorHug1 with a much less powerful processor, but speaking the same protocol, so all the firmware update and test tools just work out of the box. It sleeps between readings too, so it only consumes a tiiiiny amount of power. I figure that with hardware that we know works out of the box, we can get developers working on (and testing!) the software integration without spending hours and hours compiling kernels and looking at DSDTs. I was planning to send out devices for free to GNOME developers wanting to hack on ALS stuff with me, and sell the devices for perhaps $20 to everyone else, just to cover costs.


The device would be a small PCB, 12x22mm in size which would be left in a spare USB slot. It only sticks out about 9mm from the edge of the laptop as most of the PCB actually gets pushed into the USB slot. It’s obviously non-ideal, and non-sexy, but I really think this is the way to break the chicken/egg problem we have with ALS sensors. It obviously costs money to make a device like this, and the smallest batch possible is about 60 – so before I spend any more of my spare money/time on this, is anyone actually interested in saving tons of power using an ALS sensor and dimming the display? Comment here or email me if you’re interested. Thanks.

by hughsie at February 05, 2015 04:44 PM

Andrew Zonenberg, Silicon Exposed

How to lose my business permanently

This post is a bit different from my usual ones in that it discusses the business side of the semiconductor industry, not the technical side. The issue has been getting more and more problematic lately so I figured I'd write up a few quick thoughts on it.

Let's suppose you're a large semiconductor company who is currently making a large amount of money selling chips to a couple of major customers. You've decided that your business is too big and you have no desire to get new customers, now or in the future. In fact, you don't even want these companies to use your products in new designs. What are some ways you can get rid of these pesky engineers trying to throw money at you?
  1. Make your parts hard to find. Ask major distributors like Digi-Key and Avnet to discontinue stocking them.
  2. If someone does manage to find an authorized sales partner, pester them with questions even if they're just looking for a budgetary price quote for a feasibility study. Ask for a project name, description, business plan, names of the team members, color of the soldermask, logo, and anything else you can think of. If it looks like an initial proof of concept that the customer isn't yet confident will become a high-volume product, or a one-off test fixture/lab tool, badger them by asking about annual sales volume and volume ramp-up dates until they lose interest and buy from a competitor.
  3. Just in case anyone actually succeeds in buying your part, make it useless to them. Keep the datasheet locked up in a steel vault in your corporate headquarters. Promise would-be customers that you'll let them see it if they sign away their firstborn son and sacrifice a golden lamb on an altar made of FPGAs, but hide the actual NDA contract behind so many redirect pages and broken links that nobody can actually sign it, much less see the actual datasheet. Bonus points if your chip is something commodity like a gigabit Ethernet PHY that has nothing even remotely sensitive in the datasheet.
If you follow these rules properly, congratulations! I'll do my part to further your goals by making sure you will never get design wins in any projects I'm involved in, especially high-volume ones for large companies. Your shareholders will be overjoyed.

by Andrew Zonenberg at February 05, 2015 01:54 AM

February 02, 2015


Reverse-engineering of KR580VM80A / i8080 is complete!

We are glad to announce that reverse engineering of the KR580VM80A (an Intel 8080-compatible part, and the most popular CPU in the ex-USSR) is finally complete. Insane engineer Vslav (of 1801ВМ fame) recovered the full schematic in a very short time. After we got the annotation done and sorted out the license (CC-BY-3.0), it is now available for everyone to enjoy.

It turns out the chip has exactly 4758 transistors (contrary to rumors of 6000 or 4500).

The layout of the KR580VM80A is quite similar to, though not identical to, the i8080's; however, no opcodes differing from the i8080 were identified.

The Verilog model of the KR580VM80A passed tough compatibility tests, both in simulation and as an FPGA replacing an actual KR580VM80A in a "Specialist" computer.

Download links: Main verilog, Schematic, Full package.

Die annotation:

February 02, 2015 08:38 AM

NXP 74AHC00 : Weekend die-shot

What is the simplest possible microchip? Probably the 7400 - a quad 2-input NAND gate. We made a die shot of NXP's 74AHC00 (AHC means "fast" CMOS). This is a nice example of how 'old' tech nodes (1µm and older) are still in use. Also, note how many spare vias there are.

Die size - 944x854 µm.

Update 02.02.2015: We redid this chip with better quality. Original photo (September 2, 2012). Looking at the quality difference gives me a warm feeling, 2.5 years well spent...

February 02, 2015 08:17 AM

January 31, 2015

Bunnie Studios

Name that Ware, January 2015

The Ware for January 2015 is below.

“I love capacitor”

but why?

Been in Shenzhen the past two weeks, trying to beat Chinese New Year deadlines, improve my Chinese, and learn more about manufacturing and supply chains. So far, so good. Will have more updates soon!

by bunnie at January 31, 2015 10:53 AM

Winner, Name that Ware December 2014

The Ware for December 2014 is a Molecular Devices unity-gain headstage. It features an ultra-high impedance and low noise to allow the measurement of very tiny currents. I’d say Hugo had the closest answer of them all, congrats and email me for your prize!

by bunnie at January 31, 2015 10:49 AM

January 29, 2015

Free Electrons

Embedded Linux Conference schedule announced, several talks from Free Electrons

The schedule for the upcoming Embedded Linux Conference, which takes place on March 23-25 in San Jose, has been announced and is publicly available, together with the Android Builders Summit schedule. As usual, there are lots of talks that look very interesting, so we can expect a very useful conference once again.

ELC 2015

This time around, there will be three talks given by Free Electrons engineers:

So, book your tickets, and join us for the Embedded Linux Conference at the end of March!

by Thomas Petazzoni at January 29, 2015 09:49 AM

January 28, 2015

Richard Hughes, ColorHug

Detecting fake flash

I’ve been using F3 to check my flash drives, and this is how I discovered my drives were counterfeit. It seems to me this kind of feature needs to be built inside gnome-multi-writer itself to avoid sending fake flash out to customers. Last night I wrote a simple tool called gnome-multi-writer-probe which does the following few things:

* Reads the existing data from the drive in 32kB chunks, one every ~32MB, into RAM
* Writes random 32kB blocks, one every ~32MB, and also stores them in RAM
* Resets the drive
* Reads back the 32kB blocks, from slightly different addresses and sizes, and compares them to the random data in RAM
* Writes all the saved data back to the drive.

It only takes a few seconds on most drives. It also tries to be paranoid, and saves the data back to the drive as best it can when it encounters an error. That said, please don’t use this tool on any drives that have important data on them; assume you’ll have to reformat them after using this tool. Also, it’s probably a really good idea to unmount any drives before you try this.

If you’ve got access to gnome-multi-writer from git (either from jhbuild, or from my repo) then please could you try this:

sudo gnome-multi-writer-probe /dev/sdX

Where sdX is the USB drive you want to test. I’d be interested in the output, and especially interested if you have any fake flash media you can test this with. Either leave a comment here, grab me on IRC or send me an email. Thanks.

by hughsie at January 28, 2015 02:01 PM

Free Electrons

Free Electrons at FOSDEM 2015

FOSDEM Banner

For many open-source developers based in Europe, FOSDEM is probably the most useful, interesting and exciting conference. Once again this year, several Free Electrons engineers will attend the conference:

  • Maxime Ripard, mainly involved in Allwinner-related kernel development, as well as, more recently, OpenWRT support for Marvell platforms
  • Antoine Ténart, involved in Marvell Berlin-related kernel development, and one of the developers of our Yocto Project and OpenEmbedded training course
  • Alexandre Belloni, involved in Atmel processor-related kernel development, and also one of our Yocto experts
  • Thomas Petazzoni, involved in Marvell EBU processor-related kernel development, and contributing a lot to Buildroot

If you are attending, and want to know more about Free Electrons, or discuss career or project opportunities, do not hesitate to contact us prior to the conference. Many of us will probably attend a significant number of talks from the Embedded track, so it should be easy to find us.

Last but not least, Alexandre Belloni will be giving a talk about Starting with the Yocto Project, which will take place on Sunday, at 3 PM in room Lameere.

Finally, Thomas Petazzoni has organized and will participate in the Buildroot Developers Meeting taking place right after FOSDEM, sponsored by Google and Mind.

by Thomas Petazzoni at January 28, 2015 08:48 AM

January 27, 2015

Richard Hughes, ColorHug

Scammers at

tl;dr Don’t use, they are scammers that sell fake flash.

Longer version: For the ColorHug project we buy a lot of the custom parts direct from China at a fraction of the price available to us in the UK, even with import tax considered. It would be impossible to produce such a low cost device and still make enough money to make it worth giving up our evenings and weekends. This often means sending thousands of dollars to sketchy-looking companies willing to take on small (to them!) custom orders of a few thousand parts.

So far we’ve been very lucky, until last week. I ordered 1000 customized 1GB flash drives to use as a LiveUSB image rather than using a LiveCD. I checked out the company as usual, and ordered a sample. The sample came back good quality, with 1GB of fast flash. Payment in full was sent, which isn’t unusual for my other suppliers in China.

Fast forward a few weeks. 1000 USB drives arrived, which looked great. Great, until you start using them with GNOME MultiWriter, which kept throwing validation warnings. Using the awesome F3 and a few remove-insert cycles later, the f3probe tool told me the flash chip was fake, reporting the capacity as 1GB when it was actually 96MB looped around 10 times.

Taking the drives apart, you could also see the chip itself was different from the sample, and the plastic molding and metal retaining tray were of lower quality. I contacted the seller, who said he would speak to the factory later that day. The seller got back to me today, and told me that the factory has produced “B quality drives” and basically, that I got what I paid for. For another 1600USD they would send me the 1GB ICs, which I would have to swap into the USB units. Fool me once, shame on you; fool me twice, shame on me.

I suppose people can use the tiny flash drives to get the .icc profile off the LiveCD image, which was always a stumbling block for some people, but basically the drives are worthless to me as LiveUSB devices. I’m still undecided whether to include them in the ColorHug box; i.e. is a free 96MB drive better than them all going into landfill?

As this is China, I understand all my money is gone. The company listing is gone from Alibaba, so there’s not a lot I can do there. So other people can hopefully avoid this same mistake, I’ve listed all the details here, which hopefully will become googleable:

Promo-Newa Electronic Limited(Shenzhen)
Wei and Ping Group Limited(Hongkong)  

Office: Building A, HuaQiang Garden, North HuaQiang Road, Futian district, Shenzhen China, 0755-3631 4600
Factory: Building 4, DengXinKeng Industrial Zone, JiHua Road,LongGang District, Shenzhen, China
Registered Address: 15/B—15/F Cheuk Nang Plaza 250 Hennessy Road, HongKong
Skype: promonewa

by hughsie at January 27, 2015 03:11 PM

Free Electrons

Meet us at Embedded World 2015!

Atmel booth at Embedded World 2014

Free Electrons will be present at Embedded World 2015 in Nuremberg, Germany on February 24-26. We will be present on the Atmel Corporation booth (4A-220) to demonstrate our Atmel-related developments and offerings.

Four people from Free Electrons will be present: Michael Opdenacker (CEO), Thomas Petazzoni (CTO), Anja Roubin (training operations) and Alexandre Belloni (embedded Linux engineer).

Do not hesitate to get in touch with us prior to the event if you would like to schedule a meeting to discuss business, project or career opportunities.

If you are interested in our training services, we will have very special discount vouchers for people who visit us at Embedded World.

You will also be able to ask us for free advice during the trade show. We have vast experience with embedded Linux and its kernel, and we will be most happy to give you ideas and pointers to resources that should be useful for your projects.

by Thomas Petazzoni at January 27, 2015 08:54 AM

2015 Q1 newsletter

This article was published in our quarterly newsletter.

The Free Electrons team wishes you a Happy New Year for 2015, with plenty of optimism and energy!

Free Electrons is happy to take this opportunity to share some news about the latest training and contribution activities of the company.

Kernel contributions

We continue to work significantly on support for various ARM processors in the Linux kernel. Our contributions to the latest kernel releases:

  • 147 patches from Free Electrons merged in Linux 3.17, making Free Electrons the 14th contributing company for this release by number of patches. See our blog post about this release.
  • 155 patches from Free Electrons merged in Linux 3.18, making Free Electrons the 14th contributing company. See our blog post for more details.
  • For the upcoming 3.19 release, we already have 196 patches merged.

One of the highlights was that we added support for the Atmel SAMA5D4 SoC to the Linux kernel even before the new chip was announced by Atmel! That’s a very positive sign for customers when an SoC is supported in the mainline Linux kernel sources right at product launch, instead of having to wait for months or years before the community developers can catch up.

Note that we also added Atmel SAMA5D3 SoC support to Xenomai, a hard real-time extension for the Linux kernel. Thanks to this, the Atmel SAMA5D3 Xplained board can now run with the 2.6.x release of Xenomai.

Besides those highlights, most of our kernel contributions were as usual centered around support for specific families of ARM processors: CPUs from Marvell EBU and Marvell Berlin, from Atmel and from Allwinner. We added a new network driver for some Marvell EBU processors, added SMP support for Marvell Berlin processors, added a DMA controller driver for Allwinner processors, and did a lot of maintenance work to support these processors in the mainline kernel.

Buildroot contributions

Our involvement in the Buildroot project, a popular embedded Linux build system, continues. Our engineer Thomas Petazzoni contributed 136 patches to the 2014.11 release, making him the second-largest contributor by number of patches. Thomas is also taking care of the maintenance of the project on a regular basis, reviewing and merging patches from contributors.

OpenWRT contributions

We have recently started contributing to the OpenWRT project: improving the kernel support to use defconfig, and introducing a notion of board to support different NAND configurations for each platform. We will soon be pushing support for the Marvell Armada 385 platform, as well as improved support for the Marvell Armada 370 and XP platforms.

Recent projects

Besides our publicly visible kernel contributions, we do also work on customer-specific projects. Among the latest projects we have done:

  • Develop a complete Board Support Package for a custom TI AM335x based platform: U-Boot porting, Linux kernel porting, and development of a Yocto-generated system. Qt5 and OpenGL are used for the graphical application, a fairly complex audio setup had to be supported, and many traditional interfaces as well (USB Host and Device, CAN, display, etc.)
  • Develop a Board Support Package for a custom Marvell Armada 375 based platform for a telephony system. Not only did we port a Linux kernel on this platform, but we also wrote several DAHDI drivers to interface the telephony hardware of the platform with Asterisk.
  • NAND and UBI stress-testing for a customer-specific Freescale i.MX28 based platform. We improved the NAND controller driver, added a new MTD tool to generate bitflips, and did some long term power-cut stress-testing of the UBIFS setup to ensure the reliability of the platform. See our kernel driver improvements and the new nandflipbits tool.
  • Adapt an existing ADC driver for a customer-specific platform to the modern Industrial Input Output (IIO) subsystem of the kernel.

Conferences: FOSDEM, Embedded World and Embedded Linux Conference

Several Free Electrons engineers will participate in the FOSDEM conference, taking place on January 31 and February 1 in Brussels. In addition, Thomas Petazzoni will participate in the Buildroot Developers Meeting that takes place right after FOSDEM in the Google offices in Brussels.

Free Electrons will participate in the Embedded World trade show on February 24-26 in Nuremberg, Germany. We will be present at Atmel’s booth and visiting exhibitor booths too. For people in Europe, this will be a good opportunity to ask questions about our embedded Linux training and engineering services. In particular, you will be able to meet our engineers Alexandre Belloni and Thomas Petazzoni (CTO), Michael Opdenacker (CEO), as well as Anja Roubin, the new person in charge of our training services.

This year again, most of the Free Electrons engineering team (7 engineers) will participate in the 2015 edition of the Embedded Linux Conference on March 23-25 in San Jose, California. We submitted several talk proposals, but our presence won’t depend on the number of talks that are eventually accepted. Participating in this conference, and in its European edition in the fall, is very important for us: it ensures we do not miss any of the interesting developments in the technical community, and above all it strengthens our ties with community developers. This helps us be good technical trainers with valuable experience and information to share. Strong relationships with other community developers (and in particular with project maintainers) also help us when customers contract us to add hardware support or features to official versions of community projects such as the Linux kernel.

Free technical documentation resources

Since the latest edition of this newsletter, we started running our new Yocto Project and OpenEmbedded course, and we released all training materials for this course. As usual, such materials are meant to be used by people learning by themselves too. All you have to do is get your hands on a Beaglebone Black board, read the slides and try to do the labs!

Our engineer Maxime Ripard also contributed documentation about the DMAEngine subsystem in the Linux kernel.

Upcoming training sessions – Now in Paris too!

The news is that we will run new public sessions in Paris, in addition to the ones we usually organize in Toulouse, Avignon and Lyon in France. We are starting with our embedded Linux and our Yocto courses, but other topics will follow too.

So, here are our next session dates:

See sessions and dates for more details. Of course, we can also deliver our training courses at your location, anywhere in the world. Feel free to contact us for a quote.

If you are interested in more frequent news about Free Electrons, you can follow us on Twitter, Google+ and LinkedIn.

by Michael Opdenacker at January 27, 2015 05:35 AM

January 23, 2015


MC34063 : Weekend die-shot

MC34063 is by far the most widespread DC-DC switching regulator.

Update 23.01.2015: We've returned to this chip while testing our new optical and etching setup. Here it is again with the metalization stripped. The original article was published 25.11.2012.

January 23, 2015 03:12 PM

January 21, 2015

Richard Hughes, ColorHug

Plugable USB Hubs

Joshua from Plugable sent me 4 different USB hubs this week so they could be added as quirks to gnome-multi-writer. If you’re going to be writing lots of USB drives, the Plugable USB3 hubs now work really well. I’ve got a feeling that inserting and removing the drive is going to be slower than the actual writing and verifying now…

by hughsie at January 21, 2015 08:30 PM

Moving update information from the distribution to upstream

I’ve been talking to various people about the update descriptions we show to the user. Without exception, the messages we show to end users are really bad. For example, the overly-complex-but-not-actually-useful:

Screenshot from 2015-01-21 10:56:34

Or, the even more-to-the-point:

Update to 3.15.4

I’m guilty of both myself. Why is this? Typically this text is written by an over-worked and under-paid packager doing updates to many applications and packages. Sometimes the packager might be the upstream maintainer, or at least involved in the project, but many times it’s just some random person who got fingered to maintain a particular package. That doesn’t make for an awesome person to write beautiful prose and text that thousands of end users are going to read. It also doesn’t make sense to write the same beautiful prose again and again for every distribution out there.

So, what do we need? We need a person who probably wrote the code, or at least signed it off, who cares about the project and cares about the user experience. i.e. the upstream maintainer.

What I’m proposing is that we ask upstream maintainers to write the release information in a way that can be shown in the software center. NEWS files are not standardized, and you don’t typically have a NEWS file for each application in your upstream tarball, so we need something else.

Surprise surprise, it’s AppStream to the rescue. AppStream has a <release> object that fits the bill almost completely; you can put in upstream version information and long (optionally translated) formatted descriptions.
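For illustration, a release entry in an appdata file might look something like this (the application id, version, timestamp and text are made up; see the AppStream specification for the authoritative format):

```xml
<component type="desktop">
  <id>org.example.App.desktop</id>
  <releases>
    <release version="3.15.4" timestamp="1421798400">
      <description>
        <p>Fixes a crash when adding a joystick.</p>
      </description>
    </release>
  </releases>
</component>
```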

Of course, you don’t want to write both NEWS and the various appdata files at release time, as that just increases the workload of the overly-busy upstream maintainer. In this case we can use appstream-util appdata-to-news in the buildsystem and generate the former from the latter automatically. We’re outputting markdown for the NEWS file, which seems to be a fairly good approximation of what NEWS files actually look like, at least for GNOME.

For a real-world example, see the GNOME MultiWriter example commit that uses this.

There are several problems with this approach. One is that the translators might have to translate lots more text; the obvious solution seems to be to only mark strings as translatable for stable versions. Alas, projects like GNOME don’t allow any new strings in stable versions, so we’ll either have to come up with an “except for release notes” amendment to that policy, or just say that all the release notes are only ever available in the C locale.

The huge thing to take away from this blog, if you are intending to use this new feature, is that update descriptions have to be understandable by end users. “Various bug fixes” is not helpful, but “Fixes a crash when adding a joystick” is. End users neither care about nor understand “Use libgusb rather than libusbx”, and technical details that do not affect the UI or UX of the application should be omitted.

This doesn’t work for distribution releases, e.g. 3.14.1-1 to 3.14.1-2, but typically these are not huge changes that we need to show so prominently to the user.

I’m also writing a script, so if anyone wants to take the plunge on an existing project it might be good to wait for that unless you like lots of copy and pasting.

Comments, as always, welcome.

by hughsie at January 21, 2015 11:00 AM

Andrew Zonenberg, Silicon Exposed

TDR updates

A few months ago, I wrote about a project I had been thinking of for a while but not had time to work on: a time-domain reflectometer (TDR) for testing twisted pair Ethernet cables.

TDR background

The basic theory of operation is simple: send a pulse down a transmission line and measure the reflected voltage over time to get a plot of impedance discontinuities over time. Unfortunately, doing this with sufficient temporal resolution (sub-nanosecond) requires extremely high analog sampling rates, and GHz A/D converters are (to say the least) not cheap: the least expensive 1 GSa/s ADC on Digi-Key is the HMCAD1520, which sells for $120 each at the time of this writing. Higher sampling rates cost even more: the 1.5 GSa/s ADC081500CIYB is listed at $347.

One possible architecture would consist of a pre-amplifier for each channel, a 4:1 RF mux, and a single high-speed ADC sampled by an FPGA. This would work, but seemed quite expensive and I wanted to explore lower-cost options.

ADC architecture

After thinking about the problem for a while, I realized that the single most expensive component in a classical TDR was probably the ADC - but there was no easy way to make it cheaper. What if I could eliminate the ADC entirely?

I drew inspiration from the successive-approximation-register (SAR) ADC architecture, which essentially converts a DAC into an ADC by binary searching. The basic operating algorithm is as follows:
  • For each point T in time
    • Set Vstart = 0
    • Set Vend = Vref
    • Set DAC to (Vstart + Vend) / 2
    • Compare Vin against Vdac
    • If Vin > Vdac
      • Set current bit of sample to 1
      • Set Vstart to Vdac, update DAC, repeat
    • Else
      • Set current bit of sample to 0
      • Set Vend to Vdac, update DAC, repeat
The problem with SAR at high speeds is that N-bit resolution at M samples per second means a binary search of N steps for every sample, so the DAC must settle O(M·N) times per second - hardly an improvement!
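The binary-search loop above can be sketched in Python (an idealized model with a perfect comparator; the function and variable names are mine, not from the post):

```python
def sar_convert(vin, vref=1.0, bits=8):
    """Successive-approximation conversion of one sample.

    The DAC and comparator are modeled ideally; a real SAR ADC does the
    comparison in analog hardware. Returns the N-bit code for vin.
    """
    code = 0
    vstart, vend = 0.0, vref
    for _ in range(bits):
        vdac = (vstart + vend) / 2      # set DAC to the midpoint
        if vin > vdac:                  # comparator says "higher"
            code = (code << 1) | 1
            vstart = vdac               # search the upper half
        else:
            code = code << 1
            vend = vdac                 # search the lower half
    return code
```

Each conversion needs only `bits` comparisons, which is exactly why the DAC update rate becomes the bottleneck at high sample rates.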

In order to work around this problem, I began to think about ways to represent the data generated by a SAR ADC. I ended up modeling a simplified SAR ADC which performs a linear, rather than binary, search. We can represent the intermediate data as a matrix of 2^N rows by M columns, one column for each of the M data points.

The sampling algorithm for this simplified ADC works as follows:
  • For each point T in time 
    • Set DAC to 0
    • Compare Vin against Vdac
    • If Vin > Vdac
      • Set column[Vdac] to 1
      • Increment Vdac
      • Repeat comparison
    • Otherwise stop and capture the next sample
Once we have this matrix, we can simply sum the number of 1s in each column to calculate the corresponding sample value.
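A model of this linear-search scheme makes the column-summing step concrete (a sketch using integer sample values in [0, 2^N); the names are mine):

```python
def linear_search_digitize(signal, levels):
    """Build the 2^N-row by M-column comparison matrix described above,
    then sum each column to recover the sample values.

    `signal` holds idealized analog samples as integers in [0, levels).
    """
    # matrix[vdac][t] == 1 iff sample t exceeds DAC code vdac
    matrix = [[1 if s > vdac else 0 for s in signal]
              for vdac in range(levels)]
    # each column sum counts how many DAC codes the sample exceeds,
    # which is exactly the sample value
    return [sum(column) for column in zip(*matrix)]
```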

While this approach will clearly work, it is exponentially slower than the conventional SAR ADC since it requires up to 2^N comparisons per sample instead of N for N-bit precision. So why is it useful?

Now consider what happens if we acquire the data from a transposed version of the same matrix:
  • For each Vdac from 0 to Vref
    • For each point T in time
      • Compare Vin against Vdac
      • If Vin > Vdac
        • Set row Vdac of column T to 1
    • Increment Vdac
    • Go back in time and loop over the signal again
This version clearly captures the same data, since matrix[T][V] is set to true iff sample T is greater than V. We simply switch the inner and outer loops.

It also has a very interesting property for cost optimization: Since it only updates the DAC after sampling the entire signal, we can now use a much slower (and cheaper) DAC than with a conventional SAR. In addition, the comparator can now update at the sampling frequency instead of 2^N times the sampling frequency.

There's just one problem: It requires time travel! Why are we wasting our time analyzing a circuit that can't actually be built?

Well, as it turns out we can solve this problem too - with "parallel universes". Since the impedance of the cable is (hopefully) fairly constant over time, if we send multiple pulses they should return identical reflection waveforms. We can thus send out a pulse, test one candidate Vdac value against this waveform, then increment Vdac and repeat.
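The repeated-pulse version is the same computation with the loops swapped. Here's a sketch, where `send_pulse` stands in for firing the TDR pulse and sampling the comparator, and is assumed to return identical integer waveforms each time:

```python
def equivalent_time_digitize(send_pulse, levels):
    """Digitize a repetitive signal with one comparator and a slow DAC.

    For each DAC code we fire a fresh pulse and compare the entire
    reflected waveform against that single threshold, accumulating the
    per-sample comparison counts across pulses.
    """
    acc = None
    for vdac in range(levels):          # slow SPI DAC: one code per pulse
        waveform = send_pulse()         # reflection repeats pulse-to-pulse
        bits = [1 if s > vdac else 0 for s in waveform]  # fast comparator
        acc = bits if acc is None else [a + b for a, b in zip(acc, bits)]
    return acc                          # same column sums as the matrix model
```

Because the threshold only changes between pulses, the DAC can be arbitrarily slow without limiting the effective sampling rate.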

The end result is that with a cheap SPI DAC, a high-speed comparator, and an FPGA with a high-speed digital input we can digitize a repetitive signal to arbitrary analog precision, with sampling rate limited only by comparator bandwidth and FPGA input buffer performance!

Pulse generation

The first step in any TDR, of course, is to generate the pulse.

I spent a while looking over FPGAs and ended up deciding on the Xilinx Kintex-7 series, specifically the XC7K70T. The -1 speed grade can do 1.25 Gbps LVDS in the high-performance I/O banks (matching the -2 and -3 speed Artix-7 devices) and the higher speed grades can go up to 1.6 Gbps.

The pulse is generated by using the OSERDES of the FPGA to produce a single-cycle LVDS 1 followed by a long series of 0s. The resulting LVDS pulse is fed into a Micrel SY58603U LVDS-to-CML buffer. This slightly increases the amplitude of the output pulse and sharpens the rise time to 80 ps.

The resulting pulse is then sent through the RJ45 connector onto the cable being tested.

Output buffer

Input preamplifier

The reflected signal coming off the differential pair is AC coupled with a pair of capacitors to prevent bus fights between the (unequal) common-mode voltages of the output buffer and the preamplifier. It is then fed into a LMH6881 programmable-gain preamplifier.

This is by far the most pricey analog component I've used in a design: nearly $10 for a single amplifier. But it's a very nice amplifier (made on a SiGe BiCMOS process): very high linearity, 2.4 GHz bandwidth, and gain programmable from 6 to 26 dB over SPI in 0.25 dB steps.

Input preamplifier
The optional external terminator (R25) is intended to damp out any reflections coming off of the preamplifier if they present a problem; during the initial assembly I plan to leave it unpopulated. Since this is my first high-speed mixed signal design I'm trying to make it easy to tweak if I screwed up somehow :)

The output of the preamplifier is a differential signal with 2.5V common mode offset.

Differential to single-ended conversion

The next step is to convert the differential voltage into a single-ended voltage that we can feed into the comparator. I use an AD8045 unity-gain voltage feedback amplifier for this, configured to compute CH1_VDIFF = (CH1_BUF_P - CH1_BUF_N) + 2.5V.


The single-ended voltage is compared against the DAC output (AFE_VREF) using one half of an LMH7322 dual comparator.

The output supply of the comparator is driven by a 2.5V supply to produce LVDS-compatible differential output voltage levels.

PCB layout

The board was laid out in KiCAD using the new push-and-shove router. All of the differential pairs were manually length-matched to 0.1mm or better.

The upper left corner of the board contains four copies of the AFE. The AD8045s are on the underside of the board because the pinout made routing easier this way. Hopefully the impedance discontinuities from the vias won't matter at these signal speeds...

AFE layout, front side
AFE layout, back side
The rest of the board isn't nearly as complex: the lower left has a second RJ45 connector and a RGMII PHY for interfacing to the host PC, the power supply is at the upper right, and the FPGA is bottom center.

The power supply is divided into two regions, digital and analog. The digital supply is on the far right side of the board, safely isolated from the AFE. It uses an LTC3374 to generate a 1.0V 4A rail for the FPGA core, a 1.2V 2A rail for the FPGA transceivers and Ethernet PHY, a 1.8V 1A rail for digital I/O, and a 2.5V rail for the CML buffers and Ethernet analog logic.

The analog supply was fairly close to the AFE so I put a guard ring around it just to be safe. It consists of a LTC3122 boost converter to push the 5V nominal input voltage up to 6V, followed by a 5V LDO to give a nice clean analog rail. I ran the output of the LDO through a pi filter just to be extra safe.

The TDR subsystem didn't use any of the four 6.6 Gbps GTX serial transceivers on the FPGA because they are designed to recover their clock from the incoming signal and don't seem to support use of an external reference clock. It seemed a shame to waste them, though, so I broke them (as well as 20 0.95 Gbps LVDS channels) out to a Samtec Q-strip header for use as high-speed GPIO.

Without further ado, here's the full layout. I could have made the board quite a bit smaller in the vertical axis, but I needed to keep a constant 100mm height so it would fit in the card guides on my Eurocard rack.

The board is at fabs for quotes now and I'll make another post once the boards come back.

Layer 1
Layer 2 (ground)
Layer 3 (power)
Layer 4, flipped to make text readable

by Andrew Zonenberg at January 21, 2015 02:07 AM

January 16, 2015

Andrew Zonenberg, Silicon Exposed

Electronic Privacy: A Realist's Perspective

    Note: I originally wrote this in a Facebook note in March 2012, long before any of the recent leaks. I figured it'd be of interest to a wider audience so I'm re-posting it here.
    There's been a lot of hullabaloo lately about Google's new privacy policy etc so I decided to write up a little article describing my personal opinions on the subject.
    Note that I'm describing defensive policies which may be a bit more cynical than most people's, and not considering relevant laws or privacy policies at all. The assumption being made here is that if it's possible, and someone wants it to happen enough, they will make it happen regardless of whether it's legal.

    RULE 1: If it's on someone else's server, and not encrypted, it's public information.
    Rationale: Given the ridiculous number of data breaches we've had lately it's safe to say that any sufficiently motivated and funded person / agency could break into just about any company storing data they're interested in. On top of this, in many countries government agencies have a history of sending companies subpoenas asking for data they're interested in, which is typically forked over with little or no question.
    This goes for anything from your Facebook profile to medical/financial records to email.
    RULE 1.1: Privacy settings/policies keep honest people honest.
    Rationale: Hackers and government agencies, especially foreign ones, don't have to play by the rules. Services have bugs. Always assume that your privacy settings are wide open and set them tighter only as an additional (small) layer of defense.
    RULE 2: If it's encrypted, but you don't control the key completely, it's public information.
    Rationale: Encryption is only as good as your key management. If somebody else has the key they're a potential point of failure. Want to bet $COMPANY's key management isn't as good as yours? Also, if $COMPANY can be forced/tricked/hacked into turning over the key without your knowledge, the data is as good as public anyway.
    RULE 3: If someone can talk to it, they can root it.
    Rationale: It's pretty much impossible to say "there are no undiscovered bugs in this code" so it's safest to assume the worst... there is a bug in your operating system / installed software and anyone with enough time or money can find or buy an 0day. Want to bet there are NO security-related bugs in the code your box is running? Me neither. If your system isn't airgapped assume it could have been pwned.
    RULE 4: If it goes over an RF link and isn't end-to-end encrypted, it's public information.
    Rationale: This includes wifi (even with most grades of WEP/WPA encryption), cellular links, and everything else of that nature. Sure, the carrier may be encrypting your SMS/voice calls with some proprietary scheme of uncertain security, but they have the key so Rule 2 applies.
    RULE 5: If you have your phone with you, your whereabouts and anything you say is public information.
    Rationale: This can be derived from Rule 3. Your phone is just a computer and third parties can communicate with it. Since it includes a microphone and GPS, assume the device has been rooted and they're logging to $BADGUY on a 24/7 basis.
    RULE 6: All available data about someone/something can and will be correlated.
    Rationale: If two points of data can be identified as related, someone will figure out a way to combine them. Examples include search history (public according to Rule 1), identical usernames/emails/passwords used on different services, and public records. If someone knows that JoeSchmoe1234 said $FOO on a gaming forum and someone else called JoeSchmoe1234 said $BAR on a hacking forum, it's a pretty safe bet both comments came from the same person who's interested in gaming and hacking.

by Andrew Zonenberg at January 16, 2015 09:32 PM

Threat modeling for FPGA software backdoors

I've been interested in the security of compilers and related toolchains ever since I first read about Ken Thompson's compiler backdoor many years ago. In a nutshell, this famous backdoor does two things:

  • Whenever the backdoored C compiler compiles the "login" command, it adds a second code path that accepts a hard-coded default password in addition to the user's actual password
  • Whenever the backdoored C compiler compiles the unmodified source code of itself, it adds the backdoor to the resulting binary.
The end result is a compiler that looks fine at the source level, silently backdoors a critical system file at compilation time, and reproduces itself.

Recently, there has also been a lot of concern over backdoors in integrated circuits (either added at the source code level by a malicious employee, or at the netlist/layout level by a third-party fab). DARPA even has a program dedicated to figuring out ways of eliminating or detecting such backdoors. A 2010 paper stemming from the CSAW Embedded Systems Challenge presents a detailed taxonomy of such hardware Trojans.

As far as I can tell, the majority of research into hardware Trojans has been focused on detecting them, assuming the adversary has managed to backdoor the design in some way that provides him with a tactical or strategic advantage. I have had difficulty finding detailed threat modeling research quantifying the capability of the adversary under particular scenarios.

When we turn our attention to FPGAs, things quickly become even more interesting. There are several major differences between FPGAs and ASICs from a security perspective which may grant the adversary greater or lesser capability than with an ASIC.

Attacks at the IC fab

The function of an ASIC is largely defined at fab time (except for RAM-based firmware) while FPGAs are extremely flexible. When trying to backdoor FPGA silicon the adversary has no idea what product(s) the chip will eventually make it into. They don't even know which pins on the die will be used as inputs and which as outputs.

I suspect that this places substantial bounds on the capability of an attacker "Malfab" targeting FPGA silicon at the fab (or pre-fab RTL/layout) level since the actual RTL being targeted does not even exist yet. To start, we consider a generic FPGA without any hard IP blocks:
  • Malfab does not know which flipflops/SRAM/LUTs will eventually contain sensitive data, nor what format this data may take.
  • Malfab does not know which I/O pins may be connected to an external communications interface useful for command-and-control.
As a result, his only option is to create an extremely generic backdoor. At this level, the only thing that makes sense is to connect all I/O pins (perhaps through scan logic) to a central malware logic block which provides the ability to read (and possibly modify) all state in the device. This most likely would require two major subsystems:
  • A detector, which searches I/Os for a magic sync sequence
  • A connection from that detector to the FPGA's internal configuration access port (ICAP), used for partial reconfiguration and readback.
The design of this protocol would be very challenging since the adversary does not know anything about the external interfaces the pin may be connected to. The FPGA could be in a PLC or similar device whose only external contact is RS-232 serial on a single input pin. Perhaps it is in a network router/switch using RGMII (4-bit parallel with double data rate signalling).

I am not aware of any published work on the feasibility of such a backdoor; however, I am skeptical that a sufficiently generic Trojan could be made simple enough to evade even casual reverse engineering of the I/O circuitry, yet fast enough not to seriously cripple performance of the device.

Unfortunately for our defender Alice, modern FPGAs often contain hard IP blocks such as SERDES and RAM controllers. These present a far more attractive target to Malfab as their functionality is largely known in advance.

It is not hard to imagine, for example, a malicious patch to the RAM controller block which searches each byte group for a magic sync sequence written to consecutive addresses, then executes commands from the next few bytes. As long as Malfab is able to cause the target's system to write data of his choice to consecutive RAM addresses (perhaps by sending it as the payload of an Ethernet frame, which is then buffered in RAM) he can execute arbitrary commands on the backdoor engine. If one of these commands is "write data from SLICE_X37Y42.A5FF to RAM address 0xdeadbeef", and Malfab can predict the location of a transmit buffer of some sort, he now has the ability to exfiltrate arbitrary state from Alice's hardware.

I thus conjecture that the only feasible way to backdoor a modern FPGA at fab time is through hard IP. If we ensure that the JTAG interface (the one hard IP block whose use cannot be avoided) is not connected to attacker-controlled interfaces, use off-die SERDES, and use softcore RAM controllers on non-standard pins, it is unlikely that Malfab will be able to meaningfully affect the security of the resulting circuit.

Attacks on the toolchain

We now turn our attention to a second adversary, Maldev - the malicious development tool. Maldev works for the FPGA vendor, has compromised the source repository for their toolchain, has MITMed the download of the toolchain installer, or has penetrated Alice's network and patched the software on her computer.

Since FPGAs are inherently closed systems (more so than ASICs, in which multiple competing toolchains exist), Alice has no choice but to use the FPGA vendor's binary-blob toolchain. Although it is possible in theory for a dedicated team with sufficient time and budget to reverse engineer the FPGA and/or toolchain and create a trusted open-source development suite, I discount the possibility for the scope of this section since a fully trusted toolchain is presumably free of interesting backdoors ;)

Maldev has many capabilities that Malfab lacks, since he can execute arbitrary code on Alice's computer. Assuming that Alice is (justifiably) paranoid about the provenance of her FPGA software and runs it on a dedicated machine in a DMZ (so that it cannot infect the remainder of her network), this is equivalent to having full access to her RTL and netlist at all stages of design.

If Alice gives her development workstation Internet access, Maldev now has the ability to upload her RTL source and/or netlist, modify it at will on his computer, and then push patches back. This is trivially equivalent to a full defeat of the entire system.

Things become more interesting when we cut off command-and-control access. This is a realistic scenario if Alice is a military/defense user doing development on a classified network with no Internet connection.

The simplest attack is for Maldev to store a list of source file hashes and patches in the compromised binary. While this is very limited (custom-developed code cannot be attacked at all), many design teams are likely to use a small set of stock communications IP such as the Xilinx Tri-Mode Ethernet MAC, so patching these may be sufficient to provide him with an attack vector on the target system. Looking for AXI interconnect IP provides Maldev with a topology map of the target SoC.

Another option is graph-based analytics on the netlist at various stages of synthesis. For example, by looking for a 32-bit register initialized to 0x67452301, which is in a strongly connected component with three other registers initialized to 0xefcdab89, 0x98badcfe, and 0x10325476, Maldev can say with a high probability that he has found an implementation of MD5 and located the state registers. By looking for a 128-bit comparator between these values and another 128-bit value, a hash match check has been found (and a backdoor may be inserted). Similar techniques may be used to look for other cryptography.
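The constant-matching half of that heuristic is simple to sketch. The netlist representation here, a mapping from register name to 32-bit init value, is invented for illustration:

```python
# The four 32-bit words of the MD5 initialization vector
MD5_IV = {0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476}

def find_md5_state(init_values):
    """Return the names of registers whose init values match the MD5 IV,
    but only if all four words are present.

    A real tool would also verify that the four registers lie in one
    strongly connected component of the netlist graph before flagging them.
    """
    hits = {name for name, value in init_values.items() if value in MD5_IV}
    return hits if len(hits) == 4 else set()
```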


If FPGA development is done using silicon purchased before the start of the project, on an air-gapped machine, and without using any pre-made IP, then some bounds can clearly be placed on the adversary's capability.

I have not seen any formal threat modeling studies on this subject, although I haven't spent a ton of time looking for them due to research obligations. If anyone is aware of published work in this field I'm extremely interested.

by Andrew Zonenberg at January 16, 2015 09:32 PM

Why Apple's iPhone encryption won't stop NSA (or any other intelligence agency)

Recent news headlines have made a big deal of Apple encrypting more of the storage on their handsets, and claiming to not have a key. Depending on who you ask this is either a huge win for privacy, or a massive blow to intelligence collection and law enforcement capabilities. I'm going to try avoiding expressing any opinions of government policy here and focus on the technical details of what is and is not possible - and why disk encryption isn't as much of a major game-changer as people seem to think.

Matthew Green at Johns Hopkins wrote a very nice article on the subject recently, but there are a few points I feel it's worth going into more detail on.

The general case here is that of two people, Alice and Bob, communicating with iPhones while a third party, Eve, attempts to discover something about their communications.

First off, the changes in iOS 8 are encrypting data on disk. Voice calls, SMS, and Internet packets still cross the carrier's network in cleartext. These companies are legally required (by CALEA in the United States, and similar laws in other countries) to provide a means for law enforcement or intelligence to access this data.

In addition, if Eve can get within radio range of Alice or Bob, she can record the conversation off the air. Although the radio links are normally encrypted, many of these cryptosystems are weak and can be defeated in a reasonable amount of time by cryptanalysis. Numerous methods are available for executing man-in-the-middle attacks between handsets and cell towers, which can further enhance Eve's interception capabilities.

Second, if Eve is able to communicate with Alice or Bob's phone directly (via Wi-Fi, SMS, MITM of the radio link, MITM further upstream on the Internet, physical access to the USB port, or using spearphishing techniques to convince them to view a suitably crafted e-mail or website) she may be able to use an 0day exploit to gain code execution on the handset and bypass any/all encryption by reading the cleartext out of RAM while the handset is unlocked. Although this does require that Eve have a staff of skilled hackers to find an 0day, or deep pockets to buy one, when dealing with a nation/state level adversary this is hardly unrealistic.

Although this does not provide Eve with the ability to exfiltrate the device encryption key (UID) directly, this is unnecessary if the cleartext can be read directly. This is a case of a general trend we've been seeing for a while: encryption is no longer the weakest link, so attackers figure out ways to get around it rather than smash through it.

Third, in many cases the contents of SMS/voice are not even required. If the police wish to geolocate the phone of a kidnapping victim (or a suspect) then triangulation via cell towers and the phone's GPS, using the existing e911 infrastructure, may be sufficient. If intelligence is attempting to perform contact tracing from a known target to other entities who might be of interest, then the "who called who when" metadata is of much more value than the contents of the calls.

There is only one situation where disk encryption is potentially useful: if Alice or Bob's phone falls into Eve's hands while locked and she wishes to extract information from it. In this narrow case, disk encryption does make it substantially more difficult, or even impossible, for Eve to recover the cleartext of the encrypted data.

Unfortunately for Alice and Bob, a well-equipped attacker has several options here (which may vary depending on exactly how Apple's implementation works; many of the details are not public).

If the Secure Enclave code is able to read the UID key, then it may be possible to exfiltrate the key using software-based methods. This could potentially be done by finding a vulnerability in the Secure Enclave (as was previously done with the TrustZone kernel on Qualcomm Android devices to unlock the bootloader). In addition, if Eve works for an intelligence agency, she could potentially send an NSL to Apple demanding that they write firmware, or sign an agency-provided image, to dump the UID off a handset.

In the extreme case, it might even be possible for Eve to compromise Apple's network and exfiltrate the certificate used for signing Secure Enclave images. (There is precedent for this sort of attack - the authors of Stuxnet appear to have stolen a driver-signing certificate from Realtek.)

If Apple did their job properly, however, the UID is completely inaccessible to software and is locked up in some kind of on-die hardware security module (HSM). This means that even if Eve is able to execute arbitrary code on the device while it is locked, she must bruteforce the passcode on the device itself - a very slow and time-consuming process.

In this case, an attacker may still be able to execute an invasive physical attack. By depackaging the SoC, etching or polishing down to the polysilicon layer, and looking at the surface of the die with an electron microscope the fuse bits can be located and read directly off the surface of the silicon.

E-fuse bits on polysilicon layer of an IC (National Semiconductor DMPAL16R). Left side and bottom right fuses are blown, upper right is conducting. (Note that this is a ~800nm process, easily readable with an optical microscope. The Apple A7 is made on a 28nm process and would require an electron microscope to read.) Photo by John McMaster, CC-BY
Since the key is physically burned into the IC, once power is removed from the phone there's no practical way for any kind of self-destruct to erase it. Although this would require a reasonably well-equipped attacker, I'm pretty confident based on my previous experience that I could do it myself, with equipment available to me at school, if I had a couple of phones to destructively analyze and a few tens of thousands of dollars to spend on lab time. This is pocket change for an intelligence agency.

Once the UID is extracted, and the encrypted disk contents dumped from the flash chips, an offline bruteforce using GPUs, FPGAs, or ASICs could be used to recover the key in a fairly short time. Some very rough numbers I ran recently suggest that a 6-character upper/lowercase alphanumeric SHA-1-hashed password could be bruteforced in around 25 milliseconds (1.2 trillion guesses per second) by a 2-rack, 2500-chip FPGA cluster costing less than $250,000. Luckily, the iPhone uses an iterated key derivation function which is substantially slower.

The key derivation function used on the iPhone takes approximately 50 milliseconds on the iPhone's CPU, which comes out to about 70 million clock cycles. Performance studies of AES on a Cortex-A8 show about 25 cycles per byte for encryption plus 236 cycles for the key schedule. The key schedule setup only has to be done once so if the key is 32 bytes then we have 800 cycles per iteration, or about 87,500 iterations.
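Those figures can be checked with quick arithmetic (the ~1.4 GHz clock is my inference from 50 ms ≈ 70 million cycles; the per-byte cycle counts come from the performance studies mentioned above):

```python
cycles_total = 70_000_000              # ~50 ms at roughly 1.4 GHz
key_bytes = 32
cycles_per_iteration = 25 * key_bytes  # ~25 cycles/byte for AES encryption
iterations = cycles_total // cycles_per_iteration
print(iterations)                      # 87500
```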

It's hard to give exact performance numbers for AES bruteforcing on an FPGA without building a cracker, but if pipelined to one guess per clock cycle at 400 MHz (reasonable for a modern 28nm FPGA) an attacker could easily get around 4500 guesses per second per hash pipeline. Assuming at least two pipelines per FPGA, the proposed FPGA cluster would give 22.5 million guesses per second - sufficient to break a 6-character case-sensitive alphanumeric password in around half an hour. If we limit ourselves to lowercase letters and numbers only, it would only take 45 seconds instead of the five and a half years Apple claims bruteforcing on the phone would take. Even 8-character alphanumeric case-sensitive passwords could be within reach (about eight weeks on average, or faster if the password contains predictable patterns like dictionary words).

by Andrew Zonenberg at January 16, 2015 09:31 PM

Hello 2015 - New job, new projects, and more!

A few months ago, I wrote about some of my pending projects. Several of them are actively running, some are done, and a few new ones cropped up :) I'll be making a bunch of short posts in the next day or so on some of them.

It's been a busy fall so I haven't had time to post anything lately. I'm working hard on my thesis and plan to defend this spring. It's 84 pages and counting...

After I graduate I'll be packing up my lab and moving from Troy, NY to somewhere near Seattle. I'll be working as a senior security consultant in the hardware lab at IOActive, doing board, firmware, and silicon level reversing and security auditing, as well as original research... so you can expect some posts by me on their lab blog in the coming months. My personal lab will likely have a bit of downtime due to the move, although I'd like to get back up and running by the end of the summer at the latest.

I've also submitted a talk based on my CPLD reverse engineering work to REcon. I hinted earlier about the project but there's a lot more to it which hasn't been released publicly... details to come if the talk is accepted ;)

by Andrew Zonenberg at January 16, 2015 09:31 PM

Getting my feet wet with invasive attacks, part 2: The attack

This is part 2 of a 2-part series. Part 1, Target Recon, is here.

Once I knew what all of the wires in the ZIA did, the next step was to plan an attack to read signals out.

I decapped an XC2C32A with concentrated sulfuric acid and soldered it to my dev board to verify that it was alive and kicking.

Simple CR-II dev board with integrated FTDI USB-JTAG
After testing I desoldered the sample and brought it up to campus to introduce it to some 30 keV Ga+ ions.

I figured that all of the exposed packaging would charge, so I'd need to coat the sample with something. I normally used sputtered Pt but this is almost impossible to remove after deposition so I decided to try evaporated carbon, which can be removed nicely with oxygen plasma among other things.

I suited up for the cleanroom and met David Frey, their resident SEM/FIB expert, in front of the Zeiss 1540 FIB system. He's a former Zeiss engineer who's very protective of his "baby" and since I had never used a FIB before there was no way he was going to let me touch his, so he did all of the work while I watched. (I don't really blame him... FIB chambers are pretty cramped and it's easy to cause expensive damage by smashing into something or other. Several SEMs I've used have had one detector or another go offline for repair after a more careless user broke something.)

The first step was to mill a hole through the 900 nm or so of silicon nitride overglass using the ion beam.

Newly added via, not yet filled
Once the via was drilled and it appeared we had made contact with the signal trace, it was time to backfill with platinum. The video below is sped up 10x to avoid boring my readers ;)

Metal deposition in a FIB is basically CVD: a precursor gas is injected into the chamber near the sample and it decomposes under the influence of beam-generated secondary electrons.

Once the via was filled we put down a large (20 μm square) square pad we could hit with an electrical probe needle.

Probe pad
Once everything was done and the chamber was vented I removed the carbon coating with oxygen plasma (the cleanroom's standard photoresist removal process), packaged up my sample, went home, and soldered it back to the board for testing. After powering it up... nothing! The device was as dead as a doornail; I couldn't even get a JTAG IDCODE from it.

I repeated the experiment a week or two later, this time soldering bare stub wires to the pins so I could test by plugging the chip into a breadboard directly. This failed as well, but watching my benchtop power supply gave me a critical piece of information: while VCCINT was consuming the expected power (essentially zero), VCCIO was leaking by upwards of 20 mA.

This ruled out beam-induced damage as I had not been hitting any of the I/O circuitry with the ion beam. Assuming that the carbon evaporation process was safe (it's used all the time on fragile samples, so this seemed a reasonably safe assumption for the time being), this left only the plasma clean as the potential failure point.

I realized what was going on almost instantly: the antenna effect. The bond wires and leadframe connected to each pad in the device were acting as antennas, coupling some of the 13.56 MHz RF energy from the plasma into the input buffers, blowing out the ESD diodes and input transistors, and leaving me with a dead chip.

This left me with two possible ways to proceed: removing the coating by chemical means (a strong oxidizer could work), or not coating at all. I decided to try the latter since there were fewer steps to go wrong.

Somewhat surprisingly, the cleanroom staff had very limited experience working with circuit edits - almost all of their FIB work was process metrology and failure analysis rather than rework, so they usually coated the samples.

I decided to get trained on RPI's other FIB, the brand-new FEI Versa 3D. It's operated by the materials science staff, who are a bit less of the "helicopter parent" type and were actually willing to give me hands-on training.

The Versa can do almost everything the older 1540 can do, in some cases better. Its one limitation is that it only has a single-channel gas injection system (platinum) while the 1540 is plumbed for platinum, tungsten, SiO2, and two gas-assisted etches.

After a training session I was ready to go in for an actual circuit edit.

FIB control panel
The Versa is the most modern piece of equipment I've used to date: it doesn't even have the classical joystick for moving the stage around. Almost everything is controlled by the mouse, although a USB-based knob panel for adjusting magnification, focus, and stigmators is still provided for those who prefer to turn something with their fingers.

Its other nice feature is the quad-image view which lets you simultaneously view an ion beam image, an e-beam image, the IR camera inside the chamber (very helpful for not crashing your sample into a $10,000 objective lens!), and a navigation camera which displays a top-down optical view of your sample.

The nav-cam has saved me a ton of time. On RPI's older JSM-6335 FESEM, the minimum magnification is fairly high so I find myself spending several minutes moving my sample around the chamber half-blind trying to get it under the beam. With the Versa's nav-cam I'm able to set up things right the first time.

I brought up both of the beams on the aluminum sample mounting stub, then blanked them to try a new idea: Move around the sample blind, using the nav-cam only, then take single images in freeze-frame mode with one beam or the other. By reducing the total energy delivered to the sample I hoped to minimize charging.

This strategy was a complete success: I had some (not too severe) charging from the e-beam but almost no visible charging in the I-beam.

The first sample I ran on the Versa was electrically functional afterwards, but the probe pad I deposited was too thin to make reliable contact with. (It was also an XC2C64A since I had run out of 32s). Although not a complete success, it did show that I had a working process for circuit edits.

After another batch of XC2C32As arrived, I went up to campus for another run. The signal of interest was FB2_5_FF: the flipflop for function block 2 macrocell 5. I chose this particular signal because it was the leftmost line in the second group from the left and thus easy to recognize without having to count lines in a bus.

The drilling went flawlessly, although it was a little tricky to tell whether I had gone all the way to the target wire or not in the SE view. Maybe I should start using the backscatter detector for this?

Via after drilling before backfill
I filled in the via and made sure to put down a big pile of Pt on the probe pad so as to not repeat my last mistake.

The final probe pad, SEM image
Seen optically, the new pad was a shiny white with surface topography and a few package fragments visible through it.

Probe pad at low mag, optical image
At higher magnification a few slightly damaged CMP filler dots can be seen above the pad. I like to use filler metal for focusing and stigmating the ion beam at milling currents before I move to the region of interest because it's made of the same material as my target, it's something I can safely destroy, and it's everywhere - it's hard to travel a significant distance on a modern IC without bumping into at least a few pieces of filler metal.

Probe pad at higher magnification, optical image. Note damaged CMP filler above pad.
I soldered the CPLD back onto the board and was relieved to find out that it still worked! The next step was to write some dummy code to test it out:

`timescale 1ns / 1ps
module test(clk_2048khz, led);

    //Clock input
    (* LOC = "P1" *) (* IOSTANDARD = "LVCMOS33" *)
    input wire clk_2048khz;

    //LED out
    (* LOC = "P38" *) (* IOSTANDARD = "LVCMOS33" *)
    output reg led = 0;

    //Don't care where this is placed
    reg[17:0] count = 0;
    always @(posedge clk_2048khz)
        count <= count + 1;

    //Probe-able signal on FB2_5 FF at 2x the LED blink rate
    (* LOC = "FB2_5" *) reg toggle_pending = 0;
    always @(posedge clk_2048khz) begin
        if(count == 0)
            toggle_pending <= !toggle_pending;
    end

    //Blink the LED
    always @(posedge clk_2048khz) begin
        if(toggle_pending && (count == 0))
            led <= !led;
    end

endmodule


This is effectively a 20-bit divider chain (an 18-bit counter plus two toggle flip-flops) that blinks an LED at ~2 Hz from a 2048 kHz clock on the board. The second-to-last stage of the chain, toggle_pending (so ~4 Hz), is constrained to FB2_5, the signal we're probing.
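For the curious, the exact frequencies fall straight out of the divider arithmetic:

```python
# Divider chain arithmetic for the design above: an 18-bit counter
# followed by two toggle flip-flops, clocked at 2048 kHz.
CLK_HZ = 2_048_000

rollover_hz = CLK_HZ / 2**18   # count wraps at ~7.8 Hz
probe_hz = rollover_hz / 2     # toggle_pending: the ~4 Hz probed signal
led_hz = probe_hz / 2          # LED blink rate: ~2 Hz
print(rollover_hz, probe_hz, led_hz)
```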

After making sure things still worked I attached the board's plastic standoffs to a 4" scrap silicon wafer with Gorilla Glue to give me a nice solid surface I could put on the prober's vacuum chuck.

Test board on 4" wafer
Earlier today I went back to the cleanroom. After dealing with a few annoyances (for example, the prober with a wide range of Z axis travel, necessary for this test, was plugged into the electrical test station with curve tracing capability but no oscilloscope card) I landed a probe on the bond pad for VCCIO and one on ground to sanity check things. 3.3V... looks good.

Moving carefully, I lifted the probe up from the 3.3V bond pad and landed it on my newly added probe pad.

Landing a probe on my pad. Note speck of dirt and bent tip left by previous user. Maybe he poked himself mounting the probe?
It took a little bit of tinkering with the test unit to figure out where all of the trigger settings were, but I finally saw a ~1.8V, 4 Hz squarewave. Success!

Waveform sniffed from my probe pad
There's still a bit of tweaking needed before I can demo it to my students (among other things, the oscilloscope subsystem on the tester insists on trying to use the 100V input range, so I only have a few bits of ADC precision left to read my 1.8V waveform) but overall the attack was a success.

by Andrew Zonenberg ( at January 16, 2015 09:30 PM

New microscope bench

For a long time, my microscopes have lived on a folding plastic table in the corner of my lab. It was wobbling and causing blurry images, but I never got a chance to do something about it... until now.

I've replaced it with a custom-built wooden workbench. I made it big and heavy on purpose to reduce vibrations: the tabletop is 3/4" flooring-grade plywood and the legs are 4x4" posts resting on top of rubber pads. (I actually made two of them, because one of my roommates liked the design and wanted one for himself.)

I don't have any photos of the early build process but here's one of the tabletop and legs before the shelving was installed. It was annoying having to stain and assemble it in the middle of my living room... I can't wait to have a garage or basement to work in!

EDIT: A friend who helped me with the build just sent me this one.
Test-fitting some of the lumber
New workbench, partially assembled
After installing shelves
In its final resting place

The stain turned out a little darker than I anticipated but it actually matched the paneling in my apartment surprisingly well so no complaints there. The 16 square feet of shelf space let me tidy up the lab and remove a lot of miscellaneous junk that had been sitting around with nowhere to go.

With equipment installed
It was a lot of fun to make - I had almost forgotten how much I enjoyed woodworking. Maybe my next piece of furniture will be something more pretty and less utilitarian in nature? (Making it out of hardwood instead of construction lumber, and having better tools, would help too...)

by Andrew Zonenberg ( at January 16, 2015 09:25 PM

FPGA cluster updates

Although the "raised floor" design for my FPGA cluster looked cool, it really didn't scale. My entire desk was full, there was very limited room for new hardware, and the boards kept getting dusty. To make matters worse, long wires were needed to connect everything and it was difficult to manage them all.

Original FPGA cluster

I ended up moving forward with the plan I came up with a few months ago and my FPGA cluster is now living on the 19" rack in my living room.

The first step was to laser-cut acrylic frames for each board (or several boards, if they were small enough) that would slide into the card guides.

In the photo below you can see nodes lx9mini0 and lx9mini2 (lx9mini1 was being used for something else at the time so I put it on another card later on) on a clear 1/16" acrylic sheet cut to standard Eurocard dimensions. The clear faceplate was later replaced with an opaque black one because I think it looks better that way.

Spartan-6 FPGA boards on a 3U blade card

My existing USB hubs weren't well suited to the rackmount form factor so I built a new one. This is a ten-port USB 2.0 hub with two front-panel ports (for keyboard and mouse) and eight back-side ports for internal connections. It consists of three 4-port Cypress hub chips in a tree, plus the associated PMICs. For extra fun I threw on an XC2C128 CPLD with an SPI header so that I could potentially toggle power to individual ports remotely over SPI, but as of now this functionality isn't being used.

As with my previous hub designs, all ports are overcurrent protected. I also added external ESD clamp diodes (RCLAMP0514M) to each data line after killing a previous hub by zapping it while plugging my phone in to charge.

The port indicator LEDs are off in the idle state, green when a device has enumerated successfully, blinking green when a device is detected but fails to enumerate, and red for an overcurrent fault.

3U x 4HP USB hub blade

I went with my original plan of racking the Atlys boards in one of my empty 1U cases and the AC701 in another. The boards are screwed into standoffs which are attached to the case by cyanoacrylate adhesive.

Nodes atlys0 and atlys1 being installed in a 1U server case. JTAG and Ethernet cables are not yet installed.

I then installed my PDU board, the BeagleBone Black, the USB hub, and all of the smaller FPGA boards I was using for my research in the rack. Since I was moving the second 24-port switch from my desk to the rack I hooked the two together with a short run of multimode fiber. Fiber was overkill for the application but I had SFP ports sitting around unused and I was running out of copper interfaces...

The standard I've decided on for all new designs in the near future is as follows:
  • Boards are to be 100mm tall (3U Eurocard height)
  • Component keepouts along the top and bottom edges as specified by the Eurocard standard for card guides
  • Faceplate mounting holes on the front panel as specified by the Eurocard standard
  • 5V DC center-high power via standard 5.5mm barrel jack on the back edge
  • Network jacks and indicator LEDs on the front edge
  • JTAG, USB, and other I/O on the back edge

My current rack
The current FPGA cluster
Back side of the FPGA cluster midway through rack install. The USB cable coming out the bottom was temporary for testing until I could find a right-angle one.
All of the cards have nice shiny black faceplates except my 4-port switch prototype from a few years ago - the laser cutter on campus was unavailable the day I wanted to get the faceplate made and I never got around to doing it.

I still have quite a bit more gear to rack - several Spartan-6 boards I'm not actively using in my research, a Raspberry Pi, a Parallella, and a large number of PIC MCU development boards of various types. This will fill the current card rack and then some, so I left 3U of empty space below this one for expansion. The Parallella runs hot so I ordered a 1U fan tray which will be installed later today.

The 1U blank above the card rack is there for airflow reasons (the fan will blow upwards and exhaust air needs some space to blow out) but I can still fit stuff that doesn't block airflow. Early next week I'll be replacing it with a 16-port LC-LC patch panel so that I can terminate fiber runs to various devices on the rack (such as the AC701 board, which currently has a 2m run of multimode going around the side of the rack because there's no good way to get fiber from back to front).

by Andrew Zonenberg ( at January 16, 2015 09:25 PM

GERALD microscope control system

While my optical microscopes are capable of sufficient resolution for imaging larger-process ICs, taking massive die images (this one, for a comparatively small 3.2mm^2 die, is about 0.6 gigapixels) has been beyond my capability because I have better things to do with my time than sit in the lab for a week turning the stage knob a little, clicking a button to take a picture, turning the knob a little...

John has a computer-controlled microscope that he uses for large imaging jobs and it seemed like it'd be a good idea to make one of my own. The first step was to write some control software. John's user interface is very much a "programmer's GUI" and I figured something with a bit more eye candy wouldn't be too hard to do.

The result was a system called GERALD. (I couldn't think of a name for it, asked my girlfriend for suggestions, and this was the first she came up with...)

The prototype is using my old AmScope microscope because I didn't want to take my main Olympus out of service for an extended period of time during development. Yes, I'm aware that the "support" structure under the stage isn't very rigid... this is a software development testbed and I won't be using this microscope in the final deployment so there's no sense wasting time machining nice aluminum brackets. I just have to be careful not to move around too much when using it ;)

Prototype GERALD system on my desk. The breadboarded MCU is generating step/direction pulses from USB-serial commands and sending them to the stepper controller under the textbook.

The camera in use is an AmScope MD1900. It uses a proprietary USB protocol and only has Windows driver support, and I try to run a 100% Linux shop. This was a problem... In keeping with John's unofficial motto "Open source by any means necessary", I corrected the problem ;)

A bit of Wireshark and libusb coding later, I had a basic working driver. The protocol is closely related (but not identical) to the MU800 that John reversed a few months ago, which helped me get my foot in the door. There's a bit of trivial obfuscation (XORing control transactions with 0x55aa or 0x5aa5) which I fail to understand... The average user isn't going to notice anything, and anyone with the skills to reverse engineer USB transactions or a kernel driver will see right through it, so why bother?

I plan to try making a kernel V4L2 driver in the future but for now it works so it's not a huge priority.

The current GERALD system is very much a WIP, most basic features are there but automated image capture and some other things aren't implemented yet.

GERALD UI overview
The basic UI layout is modeled in large part on the FEI Versa FIB's control system, which is currently my favorite control system for either optical or electron microscopes.

The upper left panel is an overview of the current camera feed, scaled down to fit. The upper right view, meanwhile, shows the center of the feed at native (1:1 pixel) resolution. Both views have a footer that includes a scale bar, date/time, the objective in use, and magnification. (The magnification is the actual value based on calibration of the camera and my 24" 1080p display.)
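As a rough illustration of how such an on-screen magnification could be derived from those two calibrations (the 200 nm sample pixel size below is a made-up example value, not the actual calibration result):

```python
# On-screen magnification = display pixel pitch / physical size of one
# camera pixel, once both are known. Illustrative values only.
import math

def display_pixel_pitch_mm(diagonal_in, res_w, res_h):
    """Physical pitch of one display pixel, from diagonal size and resolution."""
    width_in = diagonal_in * res_w / math.hypot(res_w, res_h)
    return width_in * 25.4 / res_w

pitch_mm = display_pixel_pitch_mm(24, 1920, 1080)  # ~0.277 mm per display pixel
sample_px_nm = 200                                 # hypothetical camera pixel size
magnification = pitch_mm * 1e6 / sample_px_nm      # roughly 1400x on screen
print(pitch_mm, magnification)
```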

The lower right view is a webcam pointed at the sample stage. I haven't had the time to make a bracket to hold it in the right spot so it's just sitting on my desk for now. This is the equivalent of a SEM chamber cam and I hope to eventually use it to avoid crashing the sample into the objective in full-remote-control operation.

The lower left view is a navigation display showing the current sample. Unlike the Versa's nav-cam (a single static image taken of the stage when the sample is loaded) my navigation view is a composite of actual microscope images. Every time a new video frame comes in when the stage isn't moving (to avoid motion blur) it is plotted in the navigation view at the current physical position of the stage.

As of now the navigation view is non-interactive; all it does is show the current field of view on the sample. My plan is to support clicking to move the stage to a specific point, as well as drawing to define a rectangular area for step-and-repeat imaging.

In typical use I envision the user moving to the upper left and bottom right corner of the sample manually with the joystick, then selecting an objective and drawing a box around the sample to initiate a high-resolution imaging run.

In order for all of this to work properly, the system must be calibrated. The first step is to calibrate the camera so that it knows how large each pixel is. I've done this manually in the past by taking pictures of a calibration slide and counting pixels, but it's high time I automated the process.

GERALD pixel size calibration
The algorithm is quite simple and is designed to work only with the pattern on my calibration slide (10 μm pitch short lines and 50 μm pitch long lines). As of now there are limited sanity checks, so if there's no calibration slide in view the results can be somewhat strange :)
  • Convert the image to grayscale using NTSC color weights
  • Compute a gray-level histogram
  • Median-filter the histogram to smooth out spikes and find peaks
  • The distribution should be approximately bimodal (white lines on dark background). Take the mean of the peaks and threshold the image to binary.
  • Do a median filter on the binary image to smooth out noise
  • Scan across the image horizontally, one scan line at a time. Note the start and end locations of each white area. If the width is too small (less than two pixels) or too large (more than 10% of the screen) discard it. Otherwise save the width and centroid as a slice down a potential line.
  • For each slice, check if the next row down has a line slice within a couple of pixels. If so, add it to the growing polyline and remove it from the list of candidates.
  • Fit a line segment to the points in each polyline using least-squares, set its width to the mean of all slices used
  • Extend each line segment until it hits the edge of the image. If the projected line gets within a very short distance of BOTH endpoints of a second segment, the two are collinear so merge them. (This will smooth out gaps in the detected lines from dust specks etc.) The resulting lines are plotted in green in the calibration view.
  • Project the lines onto the X axis and sort them from left to right.
  • For each line from left to right, measure the perpendicular distance to the next one over (displayed in red in the calibration view).
  • Compute the median of these line lengths. This is considered to be the pitch of the lines, in pixels.
  • Find the median width of the lines.
  • If the pitch is less than three times the line width, we're looking at fine-pitch lines (10 μm pitch). Otherwise the fine pitch lines were too small to resolve and we're looking at coarse pitch (50 μm).
  • Compute the physical size of one camera pixel from the known physical pitch and pixel pitch.
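The last few steps above boil down to a simple computation. A minimal sketch, with function and variable names of my own invention rather than from the actual GERALD code:

```python
# Derive nm-per-pixel from the measured line pitch and width, using the
# fine/coarse pitch decision described in the steps above.
from statistics import median

def pixel_size_nm(pitches_px, widths_px):
    """nm per camera pixel, given measured line pitches/widths in pixels."""
    pitch_px = median(pitches_px)
    width_px = median(widths_px)
    # Pitch under 3x the line width => we're resolving the fine 10 um grid;
    # otherwise only the coarse 50 um lines are visible.
    physical_pitch_nm = 10_000 if pitch_px < 3 * width_px else 50_000
    return physical_pitch_nm / pitch_px

print(pixel_size_nm([50, 50, 50], [20, 20, 20]))  # fine pitch: 200.0 nm/pixel
```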
Now that the camera has been calibrated, we know how big a camera pixel is - but we don't have the scaling and rotation factors needed to transform between camera and stage coordinates. This is the second half of the calibration.

Rather than trying to compute a full transformation matrix, the current code simply represents a motor step for each motor as a 2-vector describing the distance (in nanometers, referenced to the camera axes) the stage moves during one motor step. We can then easily compute the distance moved by any number of motor steps as a linear combination of these two 2-vectors.

This algorithm is built on top of the same CV code used for the camera calibration.
  • Find the rightmost line on the calibration slide.
  • Locate the midpoint of it. (This is marked with a blue dot in the debug overlay.)
  • Check if this keypoint is in the left 1/4 of the camera's FOV.
  • If not, move left 50 steps, take a picture, and repeat until it is.
  • Record the position of the keypoint.
  • Move right, 50 steps at a time, until the keypoint is in the right 1/4 of the FOV.
  • Record the 2D distance the keypoint moved and divide by the number of steps taken to get the X axis step vector.
  • Repeat for the Y axis, except using 1/3 and 2/3 of the FOV as the thresholds (instead of 1/4 and 3/4), since the Y axis FOV is smaller than X.
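The procedure above reduces to: track the keypoint across a known number of steps, divide to get the per-step vector, then combine the two vectors linearly for arbitrary moves. A minimal sketch with invented example numbers:

```python
# Step-vector calibration sketch: per-axis stage motion measured in the
# camera frame, then linearly combined to predict arbitrary moves.
def step_vector(start_xy, end_xy, steps):
    """Camera-frame stage motion (nm) per motor step, as a 2-vector."""
    return ((end_xy[0] - start_xy[0]) / steps,
            (end_xy[1] - start_xy[1]) / steps)

def move_nm(x_steps, y_steps, vx, vy):
    """Predicted displacement for a move: linear combination of step vectors."""
    return (x_steps * vx[0] + y_steps * vy[0],
            x_steps * vx[1] + y_steps * vy[1])

# e.g. keypoint moved 150 um right and drifted 3 um over 300 X steps
vx = step_vector((0, 0), (150_000, 3_000), 300)    # X calibration run
vy = step_vector((0, 0), (-1_500, 150_000), 300)   # hypothetical Y calibration run
print(vx, vy)
print(move_nm(10, 5, vx, vy))  # predicted offset for 10 X + 5 Y steps
```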

The system isn't quite finished but it's coming along nicely. Hopefully I'll have time in the next couple of months to finish the software, make a PCB for the control circuit, and machine brackets to hold all of the parts onto my Olympus scope.

by Andrew Zonenberg ( at January 16, 2015 09:25 PM

January 13, 2015

Video Circuits

Video Glitch Device

Here is a device I built for a fellow artist. There are a few shots of it slowly disrupting a shot of a grid, to give you an idea of the effects it can produce.

by Chris ( at January 13, 2015 02:56 PM

Lorene Lavora

Some early computer art stills from 1983 by Lorene Lavora

by Chris ( at January 13, 2015 02:44 PM

Free Electrons

Free Electrons quarterly newsletter: January 2015

This article was published in our quarterly newsletter.

The Free Electrons team wishes you all the best for the new year 2015. May it be full of optimism and energy for you!

We take this opportunity to give you some news about our training, development and contribution activities.

Linux kernel contributions

We continued working on support for several ARM processors in the Linux kernel. Here are our contributions to the most recent releases:

  • 147 patches from Free Electrons merged in Linux 3.17, which makes us the 14th contributing company by number of patches. See our blog post about this release.
  • 155 patches from Free Electrons merged in Linux 3.18, which also puts us in 14th place. More details in our blog post.
  • For the upcoming release (3.19), we have already merged 196 patches.

One of the highlights was support for Atmel's SAMA5D4 SoC in the mainline Linux kernel, even before the processor was announced by Atmel! It is a very positive sign for customers when a processor is supported in the mainline kernel as soon as the product ships, rather than having to wait several months or years for the community version to reach a sufficient level of functionality.

Along the way, we also added support for Atmel's SAMA5D3 SoC to Xenomai, a hard real-time extension for the Linux kernel. Thanks to this, Atmel's SAMA5D3 Xplained board can now run with Xenomai 2.6.x.

Beyond these highlights, most of our Linux kernel contributions centered on support for specific ARM processor families: Marvell EBU and Marvell Berlin, Atmel and Allwinner CPUs. We added a new network driver for some Marvell EBU processors, implemented SMP support for Marvell Berlin processors, added a DMA controller driver for Allwinner chips, and did a substantial amount of maintenance work to support these processors in the mainline Linux kernel.

Buildroot contributions

Our involvement in the Buildroot project continued. Our engineer Thomas Petazzoni contributed 136 patches to the 2014.11 release, which makes him the second largest contributor by number of patches. Thomas also handles maintenance of the project on a more regular basis, reviewing and merging patches from contributors.

OpenWRT contributions

We also recently started contributing to the OpenWRT project: kernel configuration through defconfig, and the introduction of a board notion to support different NAND flash configurations for each platform. We will also soon publish support for the Marvell Armada 385 platform, and improved support for the Marvell Armada 370 and XP platforms.

Recent projects

In addition to our visible Linux kernel contributions, we have also worked on customer-specific projects. Here are a few details:

  • Development of a complete BSP for a custom TI AM335x based platform: porting U-Boot and the Linux kernel, and developing a root filesystem generated with Yocto. Qt5 and OpenGL are used for the graphical application. It required supporting a fairly complex audio setup, as well as many standard interfaces (USB host and device, CAN, display, etc.)
  • Development of a complete BSP for a Marvell Armada 375 based telephony system. In addition to porting the Linux kernel to this platform, we also created DAHDI drivers to make the hardware usable from Asterisk.
  • Robustness testing on NAND flash and UBI for a Freescale i.MX28 based platform. We improved the NAND controller driver, created a new MTD tool to generate bitflips (flipping the state of certain bits), and ran long-term power-cut resistance tests on the customer's UBIFS configuration, to guarantee the reliability of the platform. See our kernel driver improvements and the new nandflipbits utility.
  • Updated an existing ADC driver on a customer-specific platform to use the kernel's more modern Industrial I/O (IIO) subsystem.

Conferences: FOSDEM, Embedded World and Embedded Linux Conference

Several Free Electrons engineers will attend the FOSDEM conference, held in Brussels on January 31 and February 1. In addition, Thomas Petazzoni will also take part in the Buildroot Developers Meeting, held right after FOSDEM at Google's offices in Brussels.

Free Electrons will also be present at the Embedded World trade show, February 24 to 26 in Nuremberg. We will be hosted on Atmel's booth and will also visit the other exhibitors' booths. This will be a good opportunity for our European customers to meet us and learn about our engineering and training services. In particular, you will be able to meet our engineers Alexandre Belloni, Thomas Petazzoni (CTO) and Michael Opdenacker (CEO), as well as Anja Roubin, the new head of our training services.

Once again this year, almost the entire Free Electrons engineering team (7 people) will attend the 2015 edition of the Embedded Linux Conference, March 23 to 25 in San Jose, California. We have proposed several talks, but our attendance will not depend on how many of them are eventually accepted. Attending this conference, as well as its European edition in the fall, is very important to us. It allows us not to miss any interesting project in the technical community, and above all to strengthen our ties with other developers. This way, we can remain good technical trainers with valuable experience and knowledge to share. Close relationships with other community developers (in particular with project maintainers) are also invaluable when our customers ask us to add support for particular hardware or features to the mainline versions of projects such as the Linux kernel.

Free technical documentation resources

Since the last edition of our newsletter, we have delivered our first training sessions on embedded Linux development with Yocto Project and OpenEmbedded, and we have published all of the training materials. As usual, these materials are also meant to be used by people learning on their own. Just get a BeagleBone Black board, read our slides and try to do the labs!

Our engineer Maxime Ripard has also shared documentation on the Linux kernel's DMAEngine subsystem.

Upcoming training sessions - now in Paris too!

The news is that we now organize public training sessions in Paris, in addition to the ones we run in Toulouse, Avignon and Lyon. We are starting with our embedded Linux and Yocto courses, but our other topics will be offered as well.

Here is the list of our upcoming sessions:

See our sessions and dates page for details. Of course, we can also deliver our training sessions on-site at your own location, anywhere in the world. Don't hesitate to contact us for a quote.

If you are interested in more frequent news from Free Electrons, you can also follow us on Twitter, Google+ and LinkedIn.

by Michael Opdenacker at January 13, 2015 02:25 PM

January 09, 2015

Richard Hughes, ColorHug

Finding hidden applications with GNOME Software

When you do a search in GNOME Software it returns results for any application with AppStream metadata and a package name it can resolve in any remote repository. This works really well for software you’re installing from the main distribution repos, but less well for some other common cases.

Let’s say I want to install Google Chrome so that my 2-year-old daughter can ring me on Hangouts and tell me that dinner is ready. Let’s search for Chrome on my Fedora Rawhide system.

Screenshot from 2015-01-09 16:37:45

Whoa! Wait, how did you do that? First, this exists in /etc/yum.repos.d/google-chrome.repo — the important line being enabled_metadata=1. This means “download just the metadata even when enabled=0”, so we can get information about what packages are available in repos we do not enable by default for legal or policy reasons.
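A sketch of what such a .repo file looks like (the baseurl and gpgkey URLs here are illustrative — use whatever the vendor actually ships):

```ini
[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=0
enabled_metadata=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub
```

The combination of enabled=0 and enabled_metadata=1 is what makes the repo invisible to normal package operations while still feeding search results.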


We’ve also got a little XML document with the AppStream metadata (just the long description and keywords) called /usr/share/app-info/xmls/google-chrome.xml which could be included in the usual vendor-supplied fedora-22.xml if that’s what we want to do.

Screenshot from 2015-01-09 16:40:09

The other awesome feature this unlocks is when we have addon repos that are not enabled by default. For instance, my utopia repo full of super new free software applications could be included in Fedora, and if the user types in the search word we ask if the repo should be enabled. This would solve a lot of use cases if we could ship .repo files for a few popular COPRs of stuff we don’t (yet) ship in Fedora, but are otherwise free and open source software.

Screenshot from 2015-01-09 16:51:00

All the components to do this are upstream in Fedora 22 (you need a new librepo, libhif, PackageKit, libappstream-glib and gnome-software, phew!) although I’m sure we’ll be tweaking the UI and UX before Fedora 22 is released. Comments welcome.


by hughsie at January 09, 2015 08:01 PM

Uwe Hermann

My GPG key transition to a 4096-bit key

This is long overdue, so here goes:

Hash: SHA1,SHA512

I'm transitioning my GPG key from an old 1024D key to a new 4096R key.

The old key will continue to be valid for some time, but I prefer
all new correspondence to be encrypted to the new key, and will be making
all signatures going forward with the new key.

This transition document is signed with both keys to validate the transition.

If you have signed my old key, I would appreciate signatures on my new
key as well, provided that your signing policy permits that without
re-authenticating me.

Old key:

pub   1024D/0x5DD5685778D621B4 2000-03-07
      Key fingerprint = 0F3C 34D1 E4A3 8FC6 435C  01BA 5DD5 6857 78D6 21B4

New key:

pub   4096R/0x1D661A372FED8F94 2013-12-30
      Key fingerprint = 9A17 578F 8646 055C E19D  E309 1D66 1A37 2FED 8F94

Version: GnuPG v1


The new key is available from the usual keyservers.

In other news: Yes, I’ve not been blogging much recently; I will try to do updates more often. In the meantime, you can also refer to my Twitter account for random stuff or the new sigrok Twitter account for sigrok-related posts.

by Uwe Hermann at January 09, 2015 04:47 PM

Richard Hughes, ColorHug

GNOME MultiWriter 3.15.2

I’ve just released GNOME MultiWriter 3.15.2, which is the first release that makes it pretty much feature complete for me.

Reads and writes are now spread over root hubs to increase throughput. If you’ve got a hub with more than 7 ports and the port numbers don’t match the decals on the device, please contact me for more instructions.

In this release I’ve also added the ability to completely wipe a drive (write the image, then NULs to pad it out to the size of the media) and made that and the verification step optional. We also now show a warning dialog to the user the very first time the application is used, and some global progress in the title bar so you can see the total read and write throughput of the application.

With this release I’ve now moved the source to GNOME infrastructure and will do future releases there, like all the other GNOME modules. If you see something obviously broken and you have GNOME commit access, please just jump in and fix it. The translators have done a wonderful job using Transifex, but now I’m letting the just-as-awesome GNOME translator teams handle localisation.

If you’ve got a few minutes, and want to try it out, you can clone the git repo or install a package for Fedora.


by hughsie at January 09, 2015 01:03 PM

January 08, 2015


Noname MMBT3904 - npn BJT transistor : weekend die-shot

At first glance it looks very similar to the NXP PMST3904, but the topology is clearly redrawn. Notice the sketchy borders of the metalization – apparently a lift-off process was used for the metalization instead of plasma etching. But for discrete transistors this might still be acceptable for good yield.

Die size is 290x291 µm, slightly larger than NXP’s.

January 08, 2015 03:12 PM

Video Circuits

Denise Gallant, Satellite 1986

"Inspired by the book "A Voyage to Arcturus", Satellite was released in 1986. This 34 minute video was produced by Denise Gallant with a grant from the American Film Institute and National Endowment For The Arts."

by Chris ( at January 08, 2015 05:09 AM

January 05, 2015

Richard Hughes, ColorHug

GNOME MultiWriter and Large Hubs

Today I released the first version of GNOME MultiWriter, and built it for Rawhide and F21. It’s good enough for a first release, although there are still a few things left to do. The most important now is probably the self-profiling mode so that we know the best number of parallel threads to use for the read and the write. I want this to Just Work without any user interaction, but I’ll have to wait for my shipment of USB drives to arrive before I can add that functionality.

Also important to the UX is how we display failed devices. Most new USB devices accept the ISO image without a fuss, but the odd device will disconnect before completion or throw a write error. In this case it’s important to know which device is the one that belongs in the rubbish bin. This is harder than you think, as the electrical port number is not always what matches the decal on the plastic box.

For my test system I purchased a 10-port USB hub. I was interested to know how the vendor implemented this, as I don’t know of a SOIC chip that can drive more than 7 ports. It turns out, my 10-port hub is actually a 4-port hub, with a 7-port hub attached to the last port of the first hub. The second hub was also wired 1,2,3,4,7,6,5 rather than 1,2,3,4,5,6,7. This could cause my dad some issues when we tell him that device #5 needs removing.

I’ve merged some code into GNOME MultiWriter to work around this, but if you’ve got a hub with >7 ports I’d be interested to know if this works for you, or if we need to add some more VID/PID matches. If you do try this out you need libgusb from git master today. Helpfully gnome-multi-writer outputs quirk info to the command line if you use --verbose, so that makes debugging this stuff easier.

by hughsie at January 05, 2015 10:48 PM

January 02, 2015

Richard Hughes, ColorHug

Introducing GNOME MultiWriter

I spent last night writing a GNOME application to duplicate a ton of USB devices. I looked at mdcp, Clonezilla and also just writing something loopy in bash, but I need something simple my dad could use for a couple of hours a week without help.

It’s going to be super useful for me when I start shipping our LiveUSB disks in the ColorHug box (rather than LiveCDs), and possibly useful to other people wanting to duplicate a USB drive for QA testing with a small group of people, an XFCE live CD of Fedora Rawhide for a code sprint, and that kind of thing.

GNOME MultiWriter allows you to write and verify an ISO file to up to 20 USB devices at once.

Screenshot from 2015-01-02 16:24:35

Bugs (and especially pull requests) accepted on GitHub; if there’s sufficient interest I’ll move the project to after a few releases.

by hughsie at January 02, 2015 04:45 PM

Dan Reetz

Struggling to pay rent. GoPro to the rescue.

Like any reasonable person, my landlord wants a check shoved under an unmarked door in the basement. The problem is, at night and on the weekends, the door leading to the basement is locked. I am so motivated to pay my landlord that I jammed a credit card into the door to try to open it. Anyone who has tried this “trick” knows that 9 times out of 10 you just break off the damned card and the door remains locked.

As it turns out, Dana was just sponsored by GoPro, so I have around 7 metric shit-tons of GoPro garbage in my workshop. The lexan-like material that they use in their packaging felt flexible enough to be a good shim.

2015-01-01 17.19.47

Dimensions unimportant. Big enough so you can grab on with two hands.
2015-01-01 17.08.42

2015-01-01 17.04.45

Start high above the latch and work down. Use force.
2015-01-01 17.05.16

2015-01-01 17.05.29

by danreetz at January 02, 2015 01:33 AM

December 28, 2014

Bunnie Studios

From Gongkai to Open Source

About a year and a half ago, I wrote about a $12 “Gongkai” cell phone (pictured above) that I stumbled across in the markets of Shenzhen, China. My most striking impression was that Chinese entrepreneurs had relatively unfettered access to cutting-edge technology, enabling start-ups to innovate while bootstrapping. Meanwhile, Western entrepreneurs often find themselves trapped in a spiderweb of IP frameworks, spending more money on lawyers than on tooling. Further investigation taught me that the Chinese have a parallel system of traditions and ethics around sharing IP, which led me to coin the term “gongkai”. This is deliberately not the Chinese word for “Open Source”, because that word (kaiyuan) refers to openness in a Western-style IP framework, which this is not. Gongkai is more a reference to the fact that copyrighted documents, sometimes labeled “confidential” and “proprietary”, are made known to the public and shared overtly, but not necessarily according to the letter of the law. However, this copying isn’t a one-way flow of value, as it would be in the case of copied movies or music. Rather, these documents are the knowledge base needed to build a phone using the copyright owner’s chips, and as such, this sharing of documents helps to promote the sales of their chips. There is ultimately, if you will, a quid pro quo between the copyright holders and the copiers.

This fuzzy, gray relationship between companies and entrepreneurs is just one manifestation of a much broader cultural gap between the East and the West. The West has a “broadcast” view of IP and ownership: good ideas and innovation are credited to a clearly specified set of authors or inventors, and society pays them a royalty for their initiative and good works. China has a “network” view of IP and ownership: the far-sight necessary to create good ideas and innovations is attained by standing on the shoulders of others, and as such there is a network of people who trade these ideas as favors among each other. In a system with such a loose attitude toward IP, sharing with the network is necessary as tomorrow it could be your friend standing on your shoulders, and you’ll be looking to them for favors. This is unlike the West, where rule of law enables IP to be amassed over a long period of time, creating impenetrable monopoly positions. It’s good for the guys on top, but tough for the upstarts.

This brings us to the situation we have today: Apple and Google are building amazing phones of outstanding quality, and start-ups can only hope to build an appcessory for their ecosystem. I’ve reviewed business plans of over a hundred hardware startups by now, and most of them are using overpriced chipsets built using antiquated process technologies as their foundation. I’m no exception to this rule – we use the Freescale i.MX6 for Novena, which is neither the cheapest nor the fastest chip on the market, but it is the one chip where anyone can freely download almost complete documentation and anyone can buy it on Digikey. This parallel constraint of scarce documentation and scarce supply for cutting edge technology forces Western hardware entrepreneurs to look primarily at Arduino, Beaglebone and Raspberry Pi as starting points for their good ideas.

Above: Every object pictured is a phone. Inset: detail of the “Skeleton” novelty phone. Image credits: Halfdan, Rachel Kalmar

Chinese entrepreneurs, on the other hand, churn out new phones at an almost alarming pace. Phone models change on a seasonal basis. Entrepreneurs experiment all the time, integrating whacky features into phones, such as cigarette lighters, extra-large battery packs (that can be used to charge another phone), huge buttons (for the visually impaired), reduced buttons (to give to children as emergency-call phones), watch form factors, and so forth. This is enabled because very small teams of engineers can obtain complete design packages for working phones – case, board, and firmware – allowing them to fork the design and focus only on the pieces they really care about.

As a hardware engineer, I want that. I want to be able to fork existing cell phone designs. I want to be able to use a 364 MHz 32-bit microcontroller with megabytes of integrated RAM and dozens of peripherals costing $3 in single quantities, instead of a 16 MHz 8-bit microcontroller with a few kilobytes of RAM and a smattering of peripherals costing $6 in single quantities. Unfortunately, queries into getting a Western-licensed EDK for the chips used in the Chinese phones were met with a cold shoulder – our volumes are too small, or we have to enter minimum purchase agreements backed by hundreds of thousands of dollars in a cash deposit; and even then, these EDKs don’t include all the reference material the Chinese get to play with. The datasheets are incomplete and as a result you’re forced to use their proprietary OS ports. It feels like a case of the nice guys finishing last. Can we find a way to still get ahead, yet still play nice?

We did some research into the legal frameworks and challenges around absorbing Gongkai IP into the Western ecosystem, and we believe we’ve found a path to repatriate some of the IP from Gongkai into proper Open Source. However, I must interject with a standard disclaimer: we’re not lawyers, so we’ll tell you our beliefs but don’t construe them as legal advice. Our intention is to exercise our right to reverse engineer in a careful, educated fashion to increase the likelihood that, if push comes to shove, the courts will agree with our actions. However, we also feel that shying away from reverse engineering simply because it’s controversial is a slippery slope: you must exercise your rights to have them. If women didn’t vote and black people sat in the back of the bus because they were afraid of controversy, the US would still be segregated and without universal suffrage.

Sometimes, you just have to stand up and assert your rights.

There are two broad categories of issues we have to deal with, patents and copyrights. For patents, the issues are complex, yet it seems the most practical approach is to essentially punt on the issue. This is what the majority of the open source community does, and in fact many corporations have similar policies at the engineering level. Nobody, as far as we know, checks their Linux commits for patent infringement before upstreaming them. Why? Among other reasons, it takes a huge amount of resources to determine which patents apply, and if one could be infringing; and even after expending those resources, one cannot be 100% sure. Furthermore, if one becomes very familiar with the body of patents, it amplifies the possibility that an infringement, should it be found, is willful and thus triple damages. Finally, it’s not even clear where the liability lies, particularly in an open source context. Thus, we do our best not to infringe, but cannot be 100% sure that no one will allege infringement. However, we do apply a license to our work which has a “poison pill” clause for patent holders that do attempt to litigate.

For copyrights, the issue is also extremely complex. The EFF’s Coders’ Rights Project has a Reverse Engineering FAQ that’s a good read if you really want to dig into the issues. The tl;dr is that courts have found that reverse engineering to understand the ideas embedded in code and to achieve interoperability is fair use. As a result, we have the right to study the Gongkai-style IP, understand it, and produce a new work to which we can apply a Western-style Open IP license. Also, none of the files or binaries were encrypted or had access controlled by any technological measure – no circumvention, no DMCA problem.

Furthermore, all the files were obtained from searches linking to public servers – so no CFAA problem, and none of the devices we used in the work came with shrink-wraps, click-throughs, or other end-user license agreements, terms of use, or other agreements that could waive our rights.

Thus empowered by our fair use rights, we decided to embark on a journey to reverse engineer the Mediatek MT6260. It’s a 364 MHz, ARM7EJ-S, backed by 8MiB of RAM and dozens of peripherals, from the routine I2C, SPI, PWM and UART to tantalizing extras like an LCD + touchscreen controller, audio codec with speaker amplifier, battery charger, USB, Bluetooth, and of course, GSM. The gray market prices it around $3/unit in single quantities. You do have to read or speak Chinese to get it, and supply has been a bit spotty lately due to high Q4 demand, but we’re hoping the market will open up a bit as things slow down for Chinese New Year.

For a chip of such complexity, we don’t expect our two-man team to be able to unravel its entirety working on it as a part-time hobby project over the period of a year. Rather, we’d be happy if we got enough functionality so that the next time we reach for an ATMega or STM32, we’d also seriously consider the MT6260 as an alternative. Thus, we set out as our goal to port NuttX, a BSD-licensed RTOS, to the chip, and to create a solid framework for incrementally porting drivers for the various peripherals into NuttX. Accompanying this code base would be original hardware schematics, libraries and board layouts that are licensed using CC BY-SA-3.0 plus an Apache 2.0 rider for patent issues.

And thus, the Fernvale project was born.

Fernvale Hardware

Compared to the firmware, the hardware reverse engineering task was fairly straightforward. The documents we could scavenge gave us a notion of the ball-out for the chip, and the naming scheme for the pins was sufficiently descriptive that I could apply common sense and experience to guess the correct method for connecting the chip. For areas that were ambiguous, we had some stripped down phones I could buzz out with a multimeter or stare at under a microscope to determine connectivity; and in the worst case I could also probe a live phone with an oscilloscope just to make sure my understanding was correct.

The more difficult question was how to architect the hardware. We weren’t gunning to build a phone – rather, we wanted to build something a bit closer to the Spark Core, a generic SoM that can be used in various IoT-type applications. In fact, our original renderings and pin-outs were designed to be compatible with the Spark ecosystem of hardware extensions, until we realized there were just too many interesting peripherals in the MT6260 to fit into such a small footprint.

Above: early sketches of the Fernvale hardware

We settled eventually upon a single-sided core PCB that we call the “Fernvale Frond” which embeds the microUSB, microSD, battery, camera, speaker, and Bluetooth functionality (as well as the obligatory buttons and LED). It’s slim, at 3.5mm thick, and at 57x35mm it’s also on the small side. We included holes to mount a partial set of pin headers, spaced to be compatible with an Arduino, although it can only be plugged into 3.3V-compatible Arduino devices.

Above: actual implementation of Fernvale, pictured with Arduino for size reference

The remaining peripherals are broken out to a pair of connectors. One connector is dedicated to GSM-related signals; the other to UI-related peripherals. Splitting GSM into a module with many choices for the RF front end is important, because it makes GSM a bona-fide user-installed feature, thus pushing the regulatory and emissions issue down to the user level. Also, splitting the UI-related features out to another board reduces the cost of the core module, so it can fit into numerous scenarios without locking users into a particular LCD or button arrangement.

Above: Fernvale system diagram, showing the features of each of the three boards

Fernvale Frond mainboard

Fernvale blade UI breakout

Fernvale spore AFE dev board

All the hardware source documents can be downloaded from our wiki.

As an interesting side-note, I had some X-rays taken of the MT6260. We did this to help us identify fake components, just in case we encountered units being sold as empty epoxy blocks, or as remarked versions of other chips (the MT6260 has variants, such as the -DA and the -A, the difference being how much on-chip FLASH is included).

X-ray of the MT6260 chip. A sharp eye can pick out the outline of multiple ICs among the wirebonds. Image credit: Nadya Peek

To our surprise, this $3 chip didn’t contain a single IC, but rather, it’s a set of at least 4 chips, possibly 5, integrated into a single multi-chip module (MCM) containing hundreds of wire bonds. I remember back when the Pentium Pro’s dual-die package came out. That sparked arguments over yielded costs of MCMs versus using a single bigger die; generally, multi-chip modules were considered exotic and expensive. I also remember at the time, Krste Asanović, then a professor at the MIT AI Lab, now at Berkeley, told me that the future wouldn’t be system on a chip, but rather “system mostly on a chip”. The root of his claim is that the economics of adding in mask layers to merge DRAM, FLASH, Analog, RF, and Digital into a single process weren’t favorable, and instead it would be cheaper and easier to bond multiple die together into a single package. It’s a race between the yield and cost impact (both per-unit and NRE) of adding more process steps in the semiconductor fab, vs. the yield impact (and relative reworkability and lower NRE cost) of assembling modules. Single-chip SoCs were the zeitgeist at the time (and still kind of are), so it’s interesting to see a significant datapoint validating Krste’s insight.

Reversing the Boot Structure

The amount of documentation made available to Shanzhai engineers in China seems to be just enough to enable them to assemble a phone and customize its UI, but not enough to do a full OS port. You eventually come to recognize that all the phones based on a particular chipset have the same backdoor codes, and oftentimes the UI is inconsistent with the implemented hardware. For example, the $12 phone mentioned at the top of the post will prompt you to plug headphones into the headphone jack for the FM radio to work, yet there is no headphone jack provided in the hardware. In order to make Fernvale accessible to engineers in the West, we had to reconstruct everything from scratch, from the toolchain, to the firmware flashing tool, to the OS, to the applications. Given that all the Chinese phone implementations simply rely upon Mediatek’s proprietary toolchain, we had to do some reverse engineering work to figure out the boot process and firmware upload protocol.

My first step is always to dump the ROM, if possible. We found exactly one phone model which featured an external ROM that we could desolder (it uses the -D ROMless variant of the chip), and we read its contents using a conventional ROM reader. The good news is that we saw very little ciphertext in the ROM; the bad news is there’s a lot of compressed data. Below is a page from our notes after doing a static analysis on the ROM image.

0x0000_0000		media signature “SF_BOOT”
0x0000_0200		bootloader signature “BRLYT”, “BBBB”
0x0000_0800		sector header 1 (“MMM.8”)
0x0000_09BC		reset vector table
0x0000_0A10		start of ARM32 instructions – stage 1 bootloader?
0x0000_3400		sector header 2 (“MMM.8”) – stage 2 bootloader?
0x0000_A518		thunk table of some type
0x0000_B704		end of code (padding until next sector)
0x0001_0000		sector header 3 (“MMM.8”) – kernel?
0x0001_0368		jump table + runtime setup (stack, etc.)
0x0001_0828		ARM thumb code start – possibly also baseband code
0x0007_2F04		code end
0x0007_2F05 – 0x0009_F005	padding “DFFF”
0x0009_F006		code section begin “Accelerated Technology / ATI / Nucleus PLUS”
0x000A_2C1A		code section end; pad with zeros
0x000A_328C		region of compressed/unknown data begin
0x007E_E200		modified FAT partition #1
0x007E_F400		modified FAT partition #2

One concern about reverse engineering SoCs is that they have an internal boot ROM that is always run before code is loaded from an external device. This internal ROM can also have signature and security checks that prevent tampering with the external code, and so to determine the effort level required we wanted to quickly figure out how much code was running inside the CPU before jumping to external boot code. This task was made super-quick, done in a couple hours, using a Tek MDO4104B-6. It has the uncanny ability to take deep, high-resolution analog traces and do post-capture analysis as digital data. For example, we could simply probe around while cycling power until we saw something that looked like RS-232, and then run a post-capture analysis to extract any ASCII text that could be coded in the analog traces. Likewise, we could capture SPI traces and the oscilloscope could extract ROM access patterns through a similar method. By looking at the timing of text emissions versus SPI ROM address patterns, we were able to quickly determine that if the internal boot ROM did any verification, it was minimal and nothing approaching the computational complexity of RSA.

Above: Screenshot from the Tek MDO4104B-6, showing the analog trace in yellow, and the ASCII data extracted in cyan. The top quarter shows a zoomed-out view of the entire capture; one can clearly see how SPI ROM accesses in gray are punctuated with console output in cyan.
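The scope’s ASCII extraction boils down to finding printable runs once the analog trace has been sliced into byte values; a rough equivalent in Python (the slicing itself is the scope’s job and is not shown here):

```python
def extract_ascii(samples, min_run=4):
    """Pull printable ASCII runs out of a raw capture buffer."""
    runs, current = [], bytearray()
    for b in samples:
        if 0x20 <= b < 0x7f:          # printable ASCII range
            current.append(b)
        else:
            if len(current) >= min_run:
                runs.append(current.decode())
            current = bytearray()
    if len(current) >= min_run:       # flush a trailing run
        runs.append(current.decode())
    return runs
```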

From here, we needed to speed up our measure-modify-test loop; desoldering the ROM, sticking it in a burner, and resoldering it onto the board was going to get old really fast. Given that we had previously implemented a NAND FLASH ROMulator on Novena, it made sense to re-use that code base and implement a SPI ROMulator. We hacked up a GPBB board and its corresponding FPGA code, and implemented the ability to swap between the original boot SPI ROM and a dual-ported 64kiB emulator region that is also memory-mapped into the Novena Linux host’s address space.

Block diagram of the SPI ROMulator FPGA

There’s a phone in my Novena! What’s that doing there?

A combination of these tools – the address stream determined by the Tek oscilloscope, rapid ROM patching by the ROMulator, and static code analysis using IDA (we found a SHA-1 implementation) – enabled us to determine that the initial bootloader, which we refer to as the 1bl, was hash-checked using a SHA-1 appendix.
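The appendix check is easy to reproduce; a sketch, assuming the digest covers everything before the final 20 bytes (the exact region hashed by the MT6260 boot ROM may differ):

```python
import hashlib

def check_sha1_appendix(blob):
    """Verify a SHA-1 'appendix': the last 20 bytes must be the SHA-1
    digest of everything that precedes them."""
    if len(blob) <= 20:
        return False
    body, digest = blob[:-20], blob[-20:]
    return hashlib.sha1(body).digest() == digest
```

This is also why any tool that patches the 1bl (like the ROMulator I/O target described below in the text) has to recompute and rewrite the trailing digest after every change.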

Building a Beachhead

The next step was to create a small interactive shell which we could use as a beachhead for running experiments on the target hardware. Xobs created a compact REPL environment called Fernly which supports commands like peeking and poking to memory, and dumping CPU registers.

Because we designed the ROMulator to make the emulated ROM appear as a 64k memory-mapped window on a Linux host, it enables the use of a variety of POSIX abstractions, such as mmap(), open() (via /dev/mem), read() and write(), to access the emulated ROM. Xobs used these abstractions to create an I/O target for radare2. The I/O target automatically updates the SHA-1 hash every time we make changes in the 1bl code space, enabling us to do cute things like interactively patch and disassemble code within the emulated ROM space.

We also wired up the power switch of the phone to an FPGA I/O, so we could write automated scripts that toggle the power on the phone while updating the ROM contents, allowing us to do automated fuzzing of unknown hardware blocks.

Attaching a Debugger

Because of the difficulty in trying to locate critical blocks, and because JTAG is multiplexed with critical functions on the target device, an unconventional approach was taken to attach a debugger: xobs emulates the ARM core, and uses his fernly shell to reflect virtual loads and stores to the live target. This allows us to attach a remote debugger to the emulated core, bypassing the need for JTAG and allowing us to use cross-platform tools such as IDA on x86 for the reversing UI.

At the heart of this technique is Qemu, a multi-platform system emulator. It supports emulating ARM targets, specifically the ARMv5 used in the target device. A new machine type was created called “fernvale” that implements part of the observed hardware on the target, and simply passes unknown memory accesses directly to the device.

The Fernly shell was stripped down to only support three commands: write, read, and zero-memory. The write command pokes a byte, word, or dword into RAM on the live target, and a read command reads a byte, word, or dword from the live target. The zero-memory command is an optimization, as the operating system writes large quantities of zeroes across a large memory area.
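A host-side proxy for such a three-command shell is only a few lines; the command mnemonics and reply format below are illustrative, not Fernly’s actual wire protocol:

```python
class FernlyProxy:
    """Minimal host-side proxy for a stripped-down read/write/zero shell.

    `port` is any file-like serial object with write() and readline().
    Command syntax here is a hypothetical sketch, not Fernly's real one.
    """
    def __init__(self, port):
        self.port = port

    def _cmd(self, line):
        self.port.write(line.encode() + b"\n")
        return self.port.readline().strip()

    def read32(self, addr):
        """Read a dword from the live target."""
        return int(self._cmd("rd %08x" % addr), 16)

    def write32(self, addr, value):
        """Poke a dword into the live target's RAM."""
        self._cmd("wd %08x %08x" % (addr, value))

    def zero(self, addr, length):
        """Zero a memory range (optimization for large cleared regions)."""
        self._cmd("z %08x %08x" % (addr, length))
```

The zero-memory command matters because relaying millions of individual zero stores over a serial link would dominate the boot time of the emulated OS.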

In addition, the serial port registers are hooked and emulated, allowing a host system to display serial data as if it were printed on the target device. Finally, SPI, IRAM, and PSRAM are all emulated as they would appear on the real device. Other areas of memory are either trapped and funneled to the actual device, or are left unmapped and are reported as errors by Qemu.

The diagram above illustrates the architecture of the debugger.

Invoking the debugger is a multi-stage process. First, the actual MT6260 target is primed with the Fernly shell environment. Then, the Qemu virtual ARM CPU is “booted” using the original vendor image – or rather, primed with a known register state at a convenient point in the boot process. At this point, code execution proceeds on the virtual machine until a load or store is performed to an unknown address. Virtual machine execution is paused while a query is sent to the real MT6260 via the Fernly shell interface, and the load or store is executed on the real machine. The results of this load or store are then relayed to the virtual machine and execution is resumed. Of course, Fernly will crash if a store happens to land somewhere inside its memory footprint. Thus, we had to hide the Fernly shell code in a region of IRAM that’s trapped and emulated, so loads and stores don’t overwrite the shell code. Running Fernly directly out of the SPI ROM also doesn’t work, as part of the initialization routine of the vendor binary modifies SPI ROM timings, causing SPI emulation to fail.

Emulating the target CPU allows us to attach a remote debugger (such as IDA) via GDB over TCP without needing to bother with JTAG. The debugger has complete control over the emulated CPU, and can access its emulated RAM. Furthermore, due to the architecture of Qemu, if the debugger attempts to access any memory-mapped IO that is redirected to the real target, the debugger will be able to display live values in memory. In this way, the real target hardware is mostly idle, and is left running in the Fernly shell, while the virtual CPU performs all the work. The tight integration of this package with IDA-over-GDB also allows us to very quickly and dynamically execute subroutines and functions to confirm their purpose.

Below is an example of the output of the hybrid Qemu/live-target debug harness. You can see the trapped serial writes appearing on the console, plus a log of the writes and reads executed by the emulated ARM CPU, as they are relayed to the live target running the reduced Fernly shell.

bunnie@bunnie-novena-laptop:~/code/fernvale-qemu$ ./ 

~~~ Welcome to MTK Bootloader V005 (since 2005) ~~~

READ WORD Fernvale Live 0xa0010328 = 0x0000... ok
WRITE WORD Fernvale Live 0xa0010328 = 0x0800... ok
READ WORD Fernvale Live 0xa0010230 = 0x0001... ok
WRITE WORD Fernvale Live 0xa0010230 = 0x0001... ok
READ DWORD Fernvale Live 0xa0020c80 = 0x11111011... ok
WRITE DWORD Fernvale Live 0xa0020c80 = 0x11111011... ok
READ DWORD Fernvale Live 0xa0020c90 = 0x11111111... ok
WRITE DWORD Fernvale Live 0xa0020c90 = 0x11111111... ok
READ WORD Fernvale Live 0xa0020b10 = 0x3f34... ok
WRITE WORD Fernvale Live 0xa0020b10 = 0x3f34... ok

From this beachhead, we were able to discover the offsets of a few IP blocks that were re-used from previous, known Mediatek chips (such as the MT6235 supported by osmocomBB) by searching for their “signature”. The signatures ranged from things as simple as power-on default register values, to changes in bit patterns caused by the side effects of bit set/clear registers located at known offsets within an IP block’s address space. Using this technique, we were able to find the register offsets of several peripherals.
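
The signature search itself is conceptually simple. Here is a hedged Python sketch, where the byte pattern and the block-aligned stride are made-up placeholders rather than real MT6260 values:

```python
def find_blocks(dump, signature, stride=0x100):
    """Scan a memory dump at block-aligned offsets for a register
    signature (e.g. a run of known power-on default register values).
    Returns the offsets of every match."""
    hits = []
    for off in range(0, len(dump) - len(signature) + 1, stride):
        if dump[off:off + len(signature)] == signature:
            hits.append(off)
    return hits
```

In practice the dump would be read out of the live target via the Fernly shell, and the stride would match the address granularity at which Mediatek places its IP blocks.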

Booting an OS

From here we were able to progress rapidly on many fronts, but our goal of a port of NuttX remained elusive because there was no documentation on the interrupt controller within the canon of Shanzhai datasheets. Although we were able to find the routines that installed the interrupt handlers through static analysis of the binaries, we were unable to determine the address offsets of the interrupt controller itself.

At this point, we had to open the Mediatek codebase and refer to the include file that contained the register offsets and bit definitions of the interrupt controller. We believe this is acceptable because facts are not copyrightable. Justice O’Connor wrote in Feist v. Rural (499 U.S. 340, 345, 349 (1991); see also Sony Computer Entm’t v. Connectix Corp., 203 F.3d 596, 606 (9th Cir. 2000); Sega Enterprises Ltd. v. Accolade, Inc., 977 F.2d 1510, 1522–23 (9th Cir. 1992)) that

“Common sense tells us that 100 uncopyrightable facts do not magically change their status when gathered together in one place. … The key to resolving the tension lies in understanding why facts are not copyrightable: The sine qua non of copyright is originality”


“Notwithstanding a valid copyright, a subsequent compiler remains free to use the facts contained in another’s publication to aid in preparing a competing work, so long as the competing work does not feature the same selection and arrangement”.

And so here, we must tread carefully: we must extract facts, and express them in our own selection and arrangement. Just as the facts that “John Doe’s phone number is 555-1212” and “John Doe’s address is 10 Main St.” are not copyrightable, we need to extract facts such as “The interrupt controller’s base address is 0xA0060000” and “Bit 1 controls status reporting of the LCD” from the include files, and re-express them in our own header files.

The situation is further complicated by blocks for which we have absolutely no documentation, not even an explanation of what the registers mean or how the blocks function. For these blocks, we reduce their initialization into a list of address and data pairs, and express this in a custom scripting language called “scriptic”. We invented our own language to avoid subconscious plagiarism – it is too easy to read one piece of code and, from memory, code something almost exactly the same. By transforming the code into a new language, we’re forced to consider the facts presented and express them in an original arrangement.

Scriptic is basically a set of assembler macros, and the syntax is very simple. Here is an example of a scriptic script:

#include "scriptic.h"
#include "fernvale-pll.h"

sc_new "set_plls", 1, 0, 0

  sc_write16 0, 0, PLL_CTRL_CON2
  sc_write16 0, 0, PLL_CTRL_CON3
  sc_write16 0, 0, PLL_CTRL_CON0
  sc_usleep 1

  sc_write16 1, 1, PLL_CTRL_UPLL_CON0
  sc_write16 0x1840, 0, PLL_CTRL_EPLL_CON0
  sc_write16 0x100, 0x100, PLL_CTRL_EPLL_CON1
  sc_write16 1, 0, PLL_CTRL_MDDS_CON0
  sc_write16 1, 1, PLL_CTRL_MPLL_CON0
  sc_usleep 1

  sc_write16 1, 0, PLL_CTRL_EDDS_CON0
  sc_write16 1, 1, PLL_CTRL_EPLL_CON0
  sc_usleep 1

  sc_write16 0x4000, 0x4000, PLL_CTRL_CLK_CONDB
  sc_usleep 1

  sc_write32 0x8048, 0, PLL_CTRL_CLK_CONDC
  /* Run the SPI clock at 104 MHz */
  sc_write32 0xd002, 0, PLL_CTRL_CLK_CONDH
  sc_write32 0xb6a0, 0, PLL_CTRL_CLK_CONDC

This script initializes the PLL on the MT6260. To contrast, here’s the first few lines of the code snippet from which this was derived:

// enable HW mode TOPSM control and clock CG of PLL control 

*PLL_PLL_CON2 = 0x0000; // 0xA0170048, bit 12, 10 and 8 set to 0 to enable TOPSM control 
                        // bit 4, 2 and 0 set to 0 to enable clock CG of PLL control
*PLL_PLL_CON3 = 0x0000; // 0xA017004C, bit 12 set to 0 to enable TOPSM control

// enable delay control 
*PLL_PLLTD_CON0= 0x0000; //0x A0170700, bit 0 set to 0 to enable delay control

//wait for 3us for TOPSM and delay (HW) control signal stable
for (i = 0; i < loop_1us*3; i++);

//enable and reset UPLL
reg_val = *PLL_UPLL_CON0;
reg_val |= 0x0001;
*PLL_UPLL_CON0  = reg_val; // 0xA0170140, bit 0 set to 1 to enable UPLL and generate reset of UPLL

The original code actually goes on for pages and pages, and even this snippet is surrounded by conditional statements, which we culled as they were not relevant facts for initializing the PLL correctly.
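
Comparing the script with the vendor code suggests one plausible reading of the sc_write16 arguments as (value, mask, register): a zero mask appears to correspond to a plain 16-bit store, while a non-zero mask matches the read-modify-write `reg_val |= 0x0001` pattern above. Here is a Python sketch of that interpretation; this is our inference, not a documented semantic:

```python
def sc_write16(regs, value, mask, addr):
    """Hypothetical sc_write16 semantics: mask == 0 is a plain store;
    otherwise only the masked bits are updated (read-modify-write)."""
    if mask == 0:
        regs[addr] = value & 0xffff
    else:
        old = regs.get(addr, 0)
        regs[addr] = ((old & ~mask) | (value & mask)) & 0xffff
```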

With this tool added to our armory, we were finally able to code sufficient functionality to boot NuttX on our own Fernvale hardware.


Requiring users to own a Novena ROMulator to hack on Fernvale isn't a scalable solution, so in order to round out the story, we had to create a complete developer toolchain. Fortunately, the compiler side is fairly cut-and-dried – there are many compilers that support ARM as a target, including clang and gcc. However, flashing tools for the MT6260 are much trickier, as all the existing ones we know of are proprietary Windows programs, and Osmocom's loader doesn't support the protocol version required by the MT6260. Thus, we had to reverse engineer the Mediatek flashing protocol and write our own open-source tool.

Fortunately, a blank, unfused MT6260 shows up as /dev/ttyUSB0 when you plug it into a Linux host – in other words, it shows up as an emulated serial device over USB. This at least takes care of the lower-level details of sending and receiving bytes to and from the device, leaving us with the task of reverse engineering the protocol layer. xobs located the internal boot ROM of the MT6260 and performed static code analysis, which provided a lot of insight into the protocol. He also did some static analysis on Mediatek's flashing tool and captured live traces using a USB protocol analyzer to clarify the remaining details. Below is a summary of the commands he extracted, as used in our open version of the USB flashing tool.

enum mtk_commands {
  mtk_cmd_old_write16 = 0xa1,
  mtk_cmd_old_read16 = 0xa2,
  mtk_checksum16 = 0xa4,
  mtk_remap_before_jump_to_da = 0xa7,
  mtk_jump_to_da = 0xa8,
  mtk_send_da = 0xad,
  mtk_jump_to_maui = 0xb7,
  mtk_get_version = 0xb8,
  mtk_close_usb_and_reset = 0xb9,
  mtk_cmd_new_read16 = 0xd0,
  mtk_cmd_new_read32 = 0xd1,
  mtk_cmd_new_write16 = 0xd2,
  mtk_cmd_new_write32 = 0xd4,
  // mtk_jump_to_da = 0xd5,
  mtk_jump_to_bl = 0xd6,
  mtk_get_sec_conf = 0xd8,
  mtk_send_cert = 0xe0,
  mtk_get_me = 0xe1, /* Responds with 22 bytes */
  mtk_send_auth = 0xe2,
  mtk_sla_flow = 0xe3,
  mtk_send_root_cert = 0xe5,
  mtk_do_security = 0xfe,
  mtk_firmware_version = 0xff,
};
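
As a sketch of how such a command might be framed by the open flashing tool, here is a hypothetical Python encoding of a read16 request. The big-endian layout, field widths, and absence of echo/status handling are assumptions for illustration only; the real wire format is what xobs recovered from the boot ROM and the USB traces:

```python
import struct

MTK_CMD_NEW_READ16 = 0xd0   # command byte from the enum above

def frame_read16(addr, count=1):
    """Pack a hypothetical read16 request: command byte, 32-bit address,
    32-bit halfword count. Big-endian is an assumption, not a spec."""
    return struct.pack(">BII", MTK_CMD_NEW_READ16, addr, count)
```

The resulting 9-byte frame would be written to /dev/ttyUSB0, after which the tool would read back the target's response.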

Current Status and Summary

After about a year of on-and-off effort between work on the Novena and Chibitronics campaigns, we were able to boot a port of NuttX on the MT6260. A minimal set of hardware peripherals is currently supported; it’s enough for us to roughly reproduce the functionality of an AVR used in an Arduino-like context, but not much more. We presented our results this year at 31C3 (slides).

The story takes an unexpected twist right around the time we were writing our CFP proposal for 31C3. The week before submission, we became aware that Mediatek had released the LinkIt ONE, based on the MT2502A, in conjunction with Seeed Studios. The LinkIt ONE is directly aimed at providing an Internet of Things platform to entrepreneurs and individuals. It’s integrated into the Arduino framework, featuring an open API that enables the full functionality of the chip, including GSM functions. However, the core OS that boots on the MT2502A in the LinkIt ONE is still the proprietary Nucleus OS, and one cannot gain direct access to the hardware; everything must go through the API calls provided by the Arduino shim.

Realistically, it’s going to be a while before we can port a reasonable fraction of the MT6260’s features into the open source domain, and it’s quite possible we will never be able to do a blob-free implementation of the GSM call functions, as those are controlled by a DSP unit that’s even more obscure and undocumented. Thus, given the robust functionality of the LinkIT ONE compared to Fernvale, we’ve decided to leave it as an open question to the open source community as to whether or not there is value in continuing the effort to reverse engineer the MT6260: How important is it, in practice, to have a blob-free firmware?

Regardless of the answer, we released Fernvale because we think it’s imperative to exercise our fair use rights to reverse engineer and create interoperable, open source solutions. Rights tend to atrophy and get squeezed out by competing interests if they are not vigorously exercised; for decades engineers have sat on the sidelines and seen ever more expansive patent and copyright laws shrink their latitude to learn freely and to innovate. I am saddened that the formative tinkering I did as a child is no longer a legal option for the next generation of engineers. The rise of the Shanzhai and their amazing capabilities is a wake-up call. I see it as evidence that a permissive IP environment spurs innovation, especially at the grass-roots level. If more engineers become aware of their fair use rights, and exercise them vigorously and deliberately, perhaps this can catalyze a larger and much-needed reform of the patent and copyright system.

Want to read more? Check out xobs’ post on Fernvale. Want to get involved? Chime in at our forums. Or, watch the recording of our talk below.

Team Kosagi would like to once again extend a special thanks to .mudge for making this research possible.

by bunnie at December 28, 2014 09:00 PM

December 22, 2014

Richard Hughes, ColorHug

OpenHardware : Ambient Light Sensor

My OpenHardware post about an entropy source got loads of high quality comments, huge thanks to all who chimed in. There look to be a few existing projects producing OpenHardware, and the various comments have links to the better ones. I’ll put this idea back on the shelf for another holiday-hacking session. I’ve still not given up on the SD card interface, although it looks like emulating a storage device might be the easiest and quickest route for any future project.

So, on to the next idea: an OpenHardware USB ambient light sensor. A lot of hardware doesn’t have a way of testing the ambient light level. Webcams don’t count – they use up waaaay too much power, and the auto-white balance is normally hardcoded in hardware. So I was thinking of producing a very low cost mini-dongle to measure the ambient light level, so that lower-spec laptops could save tons of power. With smartphones, people are now acutely aware that up to 60% of their battery power goes into just making the screen light up, and I’m sure we could be smarter about what we do in GNOME. The problem, traditionally, has been the lack of hardware with this support.

Anyone interested?

by hughsie at December 22, 2014 05:06 PM

December 21, 2014


KR1818VG93 - FDD controller : weekend die-shot

Enthusiasts have started yet another reverse engineering project, this time on the KR1818VG93, an FDD controller manufactured in Soviet times. Presumably it has some compatibility/similarity with the FDC1793-02 – this remains to be figured out.

Die size 4817x4794 µm, 6µm NMOS technology.

After stripping metal:

December 21, 2014 09:07 AM

December 19, 2014

Richard Hughes, ColorHug

OpenHardware Random Number Generator

Before I spend another night reading datasheets: would anyone be interested in an OpenHardware random number generator in a full-size SD card format? The idea being you insert the RNG into the SD slot of your laptop, leave it there, and the kernel module just slurps trusted entropy when required.

Why SD? It’s a port that a lot of laptops already have empty, and on server-class hardware you can just install a PCIe add-on card. I think I can build such a thing for less than $50, but at the moment I’m just waiting for parts for a prototype, so that’s really just a finger-in-the-air estimate. Are there enough free software people who care about entropy-related stuff?

by hughsie at December 19, 2014 05:06 PM

December 18, 2014

Bunnie Studios

Maker Pro: Soylent Supply Chain

A few editors have approached me about writing a book on manufacturing, but that’s a bit like asking an architect to take a photo of a building that’s still on the drawing board. The story is still unfolding; I feel as if I’m still fumbling in the dark trying to find my glasses. So, when Maker Media approached me to write a chapter for their upcoming “Maker Pro” book, I thought perhaps this was a good opportunity to make a small and manageable contribution.

The Maker Pro book is a compendium of vignettes written by 17 Makers, and you can pre-order the Maker Pro book at Amazon now.

Maker Media was kind enough to accommodate my request to license my contribution using CC BY-SA-3.0. As a result, I can share my chapter with you here. I titled it the “Soylent Supply Chain” and it’s about the importance of people and relationships when making physical goods.

Soylent Supply Chain

The convenience of modern retail and ecommerce belies the complexity of supply chains. With a few swipes on a tablet, consumers can purchase almost any household item and have it delivered the next day, without facing another human. Slick marketing videos of robots picking and packing components and CNCs milling components with robotic precision create the impression that everything behind the retail front is also just as easy as a few search queries, or a few well-worded emails. This notion is reinforced for engineers who primarily work in the domain of code; system engineers can download and build their universe from source–the FreeBSD system even implements a command known as ‘make buildworld’, which does exactly that.

The fiction of a highly automated world moving and manipulating atoms into products is pervasive. When introducing hardware startups to supply chains in practice, almost all of them remark on how much manual labor goes into supply chains. Only the very highest volume products and select portions of the supply chain are well-automated, a reality which causes many to ask me, “Can’t we do something to relieve all these laborers from such menial duty?” As menial as these duties may seem, in reality, the simplest tasks for humans are incredibly challenging for a robot. Any child can dig into a mixed box of toys and pick out a red 2×1 Lego brick, but to date, no robot exists that can perform this task as quickly or as flexibly as a human. For example, the KIVA Systems mobile-robotic fulfillment system for warehouse automation still requires humans to pick items out of self-moving shelves, and FANUC pick/pack/pal robots can deal with arbitrarily oriented goods, but only when they are homogeneous and laid out flat. The challenge of reaching into a box of random parts and producing the correct one, while being programmed via a simple voice command, is a topic of cutting-edge research.

bunnie working with a factory team. Photo credit: Andrew Huang.

The inverse of the situation is also true. A new hardware product that can be readily produced through fully automated mechanisms is, by definition, less novel than something which relies on processes not already in the canon of fully automated production processes. A laser-printed sheet will always seem more pedestrian than a piece of offset-printed, debossed, and metal-film transferred card stock. The mechanical engineering details of hardware are particularly refractory when it comes to automation; even tasks as simple as specifying colors still rely on the use of printed Pantone registries, not to mention specifying subtleties such as textures, surface finishes, and the hand-feel of buttons and knobs. Of course, any product’s production can be highly automated, but it requires a huge investment and thus must ship in volumes of millions per month to amortize the R&D cost of creating the automated assembly line.

Thus, supply chains are often made less of machines, and more of people. Because humans are an essential part of a supply chain, hardware makers looking to do something new and interesting oftentimes find that the biggest roadblock to their success isn’t money, machines, or material: it’s finding the right partners and people to implement their vision. Despite the advent of the Internet and robots, the supply chain experience is much farther away from Amazon or Target than most people would assume; it’s much closer to an open-air bazaar with thousands of vendors and no fixed prices, and in such situations getting the best price or quality for an item means building strong personal relationships with a network of vendors. When I first started out in hardware, I was ill-equipped to operate in the open-market paradigm. I grew up in a sheltered part of Midwest America, and I had always shopped at stores that had labeled prices. I was unfamiliar with bargaining. So, going to the electronics markets in Shenzhen was not only a learning experience for me technically, it also taught me a lot about negotiation and dealing with culturally different vendors. While it’s true that a lot of the goods in the market are rubbish, it’s much better to fail and learn on negotiations over a bag of LEDs for a hobby project, rather than to fail and learn on negotiations on contracts for manufacturing a core product.

One of bunnie’s projects is Novena, an open source laptop. Photo credit: Crowd Supply.

This point is often lost upon hardware startups. Very often I’m asked if it’s really necessary to go to Asia – why not just operate out of the US? Aren’t emails and conference calls good enough, or worst case, “can we hire an agent” who manages everything for us? I guess this is possible, but would you hire an agent to shop for dinner or buy clothes for you? The acquisition of material goods from markets is more than a matter of picking items from the shelf and putting them in a basket, even in developed countries with orderly markets and consumer protection laws. Judgment is required at all stages — when buying milk, perhaps you would sort through the bottles to pick the one with the greatest shelf life, whereas an agent would simply grab the first bottle in sight. When buying clothes, you’ll check for fit, loose strings, and also observe other styles, trends, and discounted merchandise available on the shelf to optimize the value of your purchase. An agent operating on specific instructions will at best get you exactly what you want, but you’ll miss out on better deals simply because you don’t know about them. At the end of the day, the freshness of milk or the fashion and fit of your clothes are minor details, but when producing at scale even the smallest detail is multiplied thousands, if not millions of times over.

More significant than the loss of operational intelligence, is the loss of a personal relationship with your supply chain when you surrender management to an agent or manage via emails and conference calls alone. To some extent, working with a factory is like being a houseguest. If you clean up after yourself, offer to help with the dishes, and fix things that are broken, you’ll always be welcome and receive better service the next time you stay. If you can get beyond the superficial rituals of politeness and create a deep and mutually beneficial relationship with your factory, the value to your business goes beyond money–intangibles such as punctuality, quality, and service are priceless.

I like to tell hardware startups that if the only value you can bring to a factory is money, you’re basically worthless to them – and even if you’re flush with cash from a round of financing, the factory knows as well as you do that your cash pool is finite. I’ve had folks in startups complain to me that in their previous experience at, say, Apple, they would get a certain level of service, so how come we can’t get the same? The difference is that Apple has a hundred billion dollars in cash, and can pay for five-star service; their bank balance and solid sales revenue are all the top-tier contract manufacturers need to see in order to engage.

Circuit Stickers, adhesive-backed electronic components, is another of bunnie’s projects. Photo credit: Andrew “bunnie” Huang.

On the other hand, hardware startups have to hitchhike and couch-surf their way to success. As a result, it’s strongly recommended to find ways other than money to bring value to your partners, even if it’s as simple as a pleasant demeanor and an earnest smile. The same is true in any service industry, such as dining. If you can afford to eat at a three-star Michelin restaurant, you’ll always have fairy godmother service, but you’ll also have a $1,000 tab at the end of the meal. The local greasy spoon may only set you back ten bucks, but in order to get good service it helps to treat the wait staff respectfully, perhaps come at off-peak hours, and leave a good tip. Over time, the wait staff will come to recognize you and give you priority service.

At the end of the day, a supply chain is made out of people, and people aren’t always rational and sometimes make mistakes. However, people can also be inspired and taught, and will work tirelessly to achieve the goals and dreams they earnestly believe in: happiness is more than money, and happiness is something that everyone wants. For management, it’s important to sell your product to the factory, to get them to believe in your vision. For engineers, it’s important to value their effort and respect their skills; I’ve solved more difficult problems through camaraderie over beers than through PowerPoint in conference rooms. For rank-and-file workers, we try our best to design the product to minimize tedious steps, and we spend a substantial amount of effort making the tools we provide them for production and testing to be fun and engaging. Where we can’t do this, we add visual and audio cues that allow the worker to safely zone out while long and boring processes run. The secret to running an efficient hardware supply chain on a budget isn’t just knowing the cost of everything and issuing punctual and precise commands, but also understanding the people behind it and effectively reading their personalities, rewarding them with the incentives they actually desire, and guiding them to improve when they make mistakes. Your supply chain isn’t just a vendor; they are an extension of your own company.

Overall, I’ve found that 99% of the people I encounter in my supply chain are fundamentally good at heart, and have an earnest desire to do the right thing; most problems are not a result of malice, but rather incompetence, miscommunication, or cultural misalignment. Very significantly, people often live up to the expectations you place on them. If you expect them to be bad actors, even if they don’t start out that way, they have no incentive to be good if they are already paying the price of being bad — might as well commit the crime if you know you’ve been automatically judged as guilty with no recourse for innocence. Likewise, if you expect people to be good, oftentimes they will rise up and perform better simply because they don’t want to disappoint you, or more importantly, themselves. There is the 1% who are truly bad actors, and by nature they try to position themselves at the most inconvenient road blocks to your progress, but it’s important to remember that not everyone is out to get you. If you can gather a syndicate of friends large enough, even the bad actors can only do so much to harm you, because bad actors still rely upon the help of others to achieve their ends. When things go wrong your first instinct should not be “they’re screwing me, how do I screw them more,” but should be “how can we work together to improve the situation?”

In the end, building hardware is a fundamentally social exercise. Generally, most interesting and unique processes aren’t automated, and as such, you have to work with other people to develop bespoke processes and products. Furthermore, physical things are inevitably owned or operated upon by other people, and understanding how to motivate and compel them will make a difference in not only your bottom line, but also in your schedule, quality, and service level. Until we can all have Tony Stark’s JARVIS robot to intelligently and automatically handle hardware fabrication, any person contemplating manufacturing hardware at scale needs to understand not only circuits and mechanics, but also how to inspire and effectively command a network of suppliers and laborers.

After all, “it’s people — supply chains are made out of people!”

by bunnie at December 18, 2014 11:02 AM

Name that Ware December 2014

The Ware for December 2014 is shown below.

Thanks again to dmo and QB for letting me photograph this ware.

Happy holidays!

by bunnie at December 18, 2014 08:22 AM

Winner, Name that Ware November 2014

The Ware for November 2014 is a linear actuator for ultra-high vacuum environments, with a pass-through. You can actually download a spec for the ware at 真空機器・部品.com. Thanks again to dmo and QB for letting me snag a couple wares and use them for the competition.

Albert got the correct first guess about it being a linear actuator for UHV environments (but missed the pass-through part). I really like Arnuschky’s detailed explanation, and he also identified the pass-through feature, so I’ll declare him the winner. Congrats, thanks for playing!

by bunnie at December 18, 2014 08:21 AM

December 17, 2014

Richard Hughes, ColorHug

Actually shipping AppStream metadata in the repodata

For the last couple of releases Fedora has been shipping the appstream metadata in a package. First it was the gnome-software package, but this wasn’t an awesome dep for KDE applications like Apper and was a pain to keep updated. We then moved the data to an appstream-data package, but this was just as much of a hack that was slightly more palatable for KDE. What I’ve wanted for a long time is to actually ship the metadata as metadata, i.e. next to the other files like primary.xml.gz on the mirrors.

I’ve just pushed the final patches to libhif, PackageKit and appstream-glib, which means that if you ship metadata of type appstream and appstream-icons in repomd.xml then they get downloaded automatically and decompressed into the right place so that gnome-software and Apper can use the data magically.

I had not worked on this much before, as appstream-builder (which actually produces the two AppStream files) wasn’t suitable for the Fedora builders for two reasons:

  • Even just processing the changed packages, it took a lot of CPU, memory, and thus time.
  • Downloading screenshots from random websites all over the internet wasn’t something a build server could do.

So, createrepo_c and modifyrepo_c to the rescue. This is what I’m currently doing for the Utopia repo.

createrepo_c --no-database x86_64/
createrepo_c --no-database SRPMS/
modifyrepo_c					\
	--no-compress				\
	/tmp/asb-md/appstream.xml.gz
modifyrepo_c					\
	--no-compress				\
	/tmp/asb-md/appstream-icons.tar.gz

If you actually do want to create the metadata on the build server, this is what I use for Utopia:

appstream-builder			\
	--api-version=0.8		\
	--origin=utopia			\
	--cache-dir=/tmp/asb-cache	\
	--enable-hidpi			\
	--max-threads=4			\
	--min-icon-size=48		\
	--output-dir=/tmp/asb-md	\
	--packages-dir=x86_64/		\
	--temp-dir=/tmp/asb-icons

For Fedora, I’m going to suggest getting the data files from during compose. It’s not ideal as it still needs a separate server to build them on (currently sitting in the corner of my office) but gets us a step closer to what we want. Comments, as always, welcome.

by hughsie at December 17, 2014 08:50 PM

November 28, 2014

Video Circuits

Gieskes 3TrinsRGB1

Gieskes has come out with a lovely little closed-architecture video synthesizer, with a small breakout breadboard that opens up the whole thing to more interesting exploitation. Beautiful stuff.

by Chris ( at November 28, 2014 04:47 AM

November 27, 2014

Bunnie Studios

Name that Ware, November 2014

The Ware for November 2014 is shown below.

(No, it’s not my turkey baster. But happy Thanksgiving!)

Thanks to dmo & QB for allowing me to photograph this ware.

by bunnie at November 27, 2014 05:46 PM

Winner, Name that Ware October 2014

The Ware from October 2014 is the active element of an HP 4900A inkjet printhead. It’s a pretty neat example of a piece of silicon being used to manipulate liquids on a micro-scale to create macro-scale results.

The winner is Adrian for getting the first near-correct guess, although I really enjoyed Marcan’s detailed thoughts about the ware. Congrats, email me for your prize.

by bunnie at November 27, 2014 05:46 PM

Altus Metrum

keithp's rocket blog: Black Friday 2014

Altus Metrum's 2014 Black Friday Event


Altus Metrum announces two special offers for "Black Friday" 2014.

We are pleased to announce that both TeleMetrum and TeleMega will be back in stock and available for shipment before the end of November. To celebrate this, any purchase of a TeleMetrum, TeleMega, or EasyMega board will include, free of charge, one each of our 160, 400, and 850 mAh Polymer Lithium Ion batteries and a free micro USB cable!

To celebrate NAR's addition of our 1.9 gram recording altimeter, MicroPeak, to the list of devices approved for use in contests and records, and help everyone get ready for NARAM 2015's altitude events, purchase 4 MicroPeak boards and we'll throw in a MicroPeak USB adapter for free!

These deals will be available from 00:00 Friday, 28 November 2014 through 23:59 Monday, 1 December, 2014. Only direct sales through our web store at are included; no other discounts apply.

Find more information on all Altus Metrum products at

Thank you for your continued support of Altus Metrum in 2014. We continue to work on more cool new products, and look forward to meeting many of you on various flight lines in 2015!

November 27, 2014 07:47 AM

November 25, 2014


#oggstreamer – Repair Series 1 – channel not working / high gain

Some weeks ago I had to exchange an OggStreamer because one channel was not working. The user, Mike, recorded the following video to demonstrate the problem:

If you watch the video closely, you can see that the left channel is always amplifying the audio with a very high gain, and the right channel only appears once he turns the gain up. In fact it is the right channel that is working correctly; the left one has a fault giving it a very high gain.

Looking at the schematics reveals that a connection error at the potentiometer (which is in the feedback loop) could cause such a high gain in the inverting amplifier:


So a loose connector is my first guess. My second guess would be a faulty potentiometer.
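As a sanity check on the diagnosis: the ideal inverting amplifier gain is G = -Rf/Rin, so a broken connection in the feedback path (Rf effectively going open) produces exactly the runaway gain seen in the video. The resistor values below are illustrative assumptions, not taken from the OggStreamer schematic:

```python
# Ideal inverting op-amp gain: G = -Rf / Rin.
# Values are illustrative, not the OggStreamer's actual parts.
def inverting_gain(r_feedback_ohms, r_input_ohms):
    return -r_feedback_ohms / r_input_ohms

# Potentiometer correctly wired in the feedback loop: moderate gain.
print(inverting_gain(10e3, 10e3))   # -1.0
# Loose contact: the feedback path goes (nearly) open, so the gain blows up.
print(inverting_gain(10e6, 10e3))   # -1000.0
```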

Let's take the device apart:


Gently push the whole assembly (top and PCB) out of the extruded aluminium case:


A quick look around and touching cables reveals the problem:


Because I had a spare contact lying around, I soldered on a new one; otherwise I could have recycled the original:


Time to put everything together and check if it works:



Both channels work now :) Success.

by oggstreamer at November 25, 2014 12:26 PM

November 17, 2014


Torex XC6206 - CMOS LDO : weekend die-shot

Torex XC6206 is a popular and really tiny CMOS LDO, especially if you compare it to older bipolar ones, which were an order of magnitude larger. A 250mA LDO in SOT-23 might be hard to believe at first.

The datasheet mentions "laser trimming", but we see the output voltage set via mask, with 2 fuses for fine tuning. It is possible, though, that common values are set in the mask (like this 3.3V one) and rare voltages are laser trimmed.

Die size 500x356 µm, 500nm technology.

Etching off metals:

November 17, 2014 04:49 AM

November 10, 2014

Free Electrons

Creation of an embedded Linux and Android meetup in Toulouse

Part of Free Electrons' engineering team is based in Toulouse, so it is only natural that we announce the creation of a regular meetup in Toulouse around our favorite topics: the Toulouse Embedded Linux & Android meetup. These events are organized with the support of Captronic.

Two dates are already planned:

These events will take place at La Cantine Toulouse, and are free of charge after registration on the website

by Thomas Petazzoni at November 10, 2014 06:53 AM


DIP 10Mhz Quartz oscillator based on Seiko NPC HA5022A3 : weekend die-shot

Seiko NPC HA5022A3 contains internal load capacitors, oscillator with amplitude limiting (for reduced power consumption) and optional frequency divider.
Die size 976x770 µm.

The quartz crystal is mounted on springs, in order to reduce the impact of vibration on oscillation stability and to make damage less likely:

There is an oscillator IC soldered on the ceramic PCB, as well as a 0.01µF power supply decoupling cap. It seems we need to go deeper:

November 10, 2014 02:32 AM

November 07, 2014

Free Electrons

Free Electrons' training courses arrive in Paris


For nearly 10 years, Free Electrons has been organizing public training sessions on embedded Linux, the Linux kernel and Android in cities in the southern half of France: Nice, Toulouse, Avignon, Lyon. We were obviously missing a presence in the capital and, more generally, in the northern half of France.

We are now remedying this by organizing a first embedded Linux training session in the capital, from March 9 to 13, 2015. Other sessions will follow, covering the other topics of our training courses. They will be announced on our sessions and dates page.

The strengths of our public training sessions remain the same:

  • A trainer with solid field experience to share, who spends most of his time on development projects. A trainer who is an active contributor to the embedded Linux developer community, and who therefore has an excellent knowledge of the resources it offers.
  • Training materials fully available on our website, so you can check in advance that they match your needs.
  • Practical labs that test the understanding of the most important concepts and build a real first experience (no manipulations executed without understanding them).
  • An ARM processor based embedded board (Atmel SAMA5D3 Xplained) included, so you can keep developing prototypes and experience beyond the training itself.
  • The guarantee of not having to share a board with another participant during the practical labs, and a class size limited to 10 people.
  • Full transparency on feedback from past participants.
  • A reduced rate for registrations made at least two months in advance, and for group registrations.

All the details on our next session.

by Michael Opdenacker at November 07, 2014 04:49 PM

November 03, 2014

Video Circuits

Just Jam Barbican

Got some work in this thing, coming up soon, along with lots of other nice artists and musicians.

by Chris ( at November 03, 2014 11:20 AM

Peter Chamberlain

Peter C has uploaded some amazing work from earlier in his career.

by Chris ( at November 03, 2014 11:16 AM

Analogue Video Workshop

So the Video Workshop went really well. We are planning to do more, but I also have a bunch of other related projects going on at the moment, so if you want to host one send me an email and I am sure we could work something out. I currently have a DIY sync-gen on the workbench and have been generating a lot of footage; there's nothing like meeting a bunch of people and getting them enthusiastic about video art to scare you into finishing some of your own work. Thanks to all who were involved in putting the workshop on, including Encounters, Seeing Sound, Arnolfini and Bath Spa University.

Oh yeah, and one of the attendees wrote us a very nice review, which is a really kind thing and something I should do more often myself when I go to talks.

Here are two shots of Alex's video synthesis lesson.

by Chris ( at November 03, 2014 11:01 AM

Sismo VGA Box

Here is what looks like a simple VGA breakout box for eurorack standard synths. It's a pretty fun way to get into video synthesis; you could put something similar together in an afternoon and use any audio source as an input.

by Chris ( at November 03, 2014 10:31 AM

And sometimes you find something you really shouldn't have missed 

"Secret Cinema was an email list that provided announcements of avant-garde & artists' film and video screenings in London. Secret Cinema ran from 2001-2011, and this blog covers events from 2006 onwards. Subscribers received information about more events than are listed on this website."

by Chris ( at November 03, 2014 10:15 AM

November 02, 2014

Kristian Paul

PiAware statistics

Got a cheap Raspberry Pi recently. Since the httpd on it was not eating many CPU cycles, I decided to run something else that would also give some use to my rtl-sdr. It's called PiAware [1]. So far I've got interesting ADS-B statistics from my place [2]. Not bad.

As a side note, using rtl-power as a passive radar could be another interesting option, just for the stats ;-).


November 02, 2014 05:00 AM

October 31, 2014


BFR93 - BJT RF transistor : weekend die-shot

BFR93 is a popular npn BJT RF transistor.
Die size 265x264 µm. The transistor itself occupies only a small part of the die - it is impractical to cut a smaller die, as it is already almost a silicon cube:

October 31, 2014 10:06 PM

October 30, 2014

Richard Hughes, ColorHug

appdata-tools is dead

PSA: If you’re using appdata-validate, please switch to appstream-util validate from the appstream-glib project. If you’re also using the M4 macro, just replace APPDATA_XML with APPSTREAM_XML. I’ll ship both the old binary and the old m4 file in appstream-glib for a little bit, but I’ll probably remove them again the next time we bump ABI. That is all. :)

by hughsie at October 30, 2014 02:53 PM

October 27, 2014


10Mhz Quartz SMD oscillator based on Seiko NPC SM5009 : weekend die-shot

Seiko NPC SM5009 contains internal load capacitors, oscillator with amplitude limiting (for reduced power consumption) and optional frequency divider.

Die size 1194x897 µm, 800nm technology.

October 27, 2014 04:59 AM

October 20, 2014


OnSemi MMBT2222A - npn BJT transistor : weekend die-shot

Die size 343x343 µm. Compared to the NXP BC847B, the die area is 1.5x larger (0.118 vs 0.076 mm²), but the maximum continuous collector current is 6 times higher (600mA vs 100mA, SOT-23 in both cases). This huge increase in current per transistor area is achieved by shunting the thin (= high-resistance) base layer with metal. The high resistance of the base layer is the limiting factor for maximum collector current in the BC847B.
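The quoted figures are easy to double-check with a couple of lines of arithmetic:

```python
# Numbers taken from the text above: die sizes in mm, currents in mA.
area_2222a = 0.343 * 0.343      # MMBT2222A, 343x343 um die
area_bc847b = 0.076             # BC847B die area
area_ratio = area_2222a / area_bc847b
current_ratio = 600 / 100       # max continuous collector current

print(round(area_2222a, 3))                  # 0.118 mm^2
print(round(area_ratio, 2))                  # ~1.55x larger die
print(round(current_ratio / area_ratio, 1))  # ~3.9x more current per unit die area
```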

October 20, 2014 12:42 PM

NXP 2N7002 N-channel MOSFET : weekend die-shot

Die size 377x377 µm.

The hexagonal cells of the TrenchMOS transistor are 4µm in size.

October 20, 2014 06:01 AM

October 19, 2014


Espressif ESP8266 WiFi-serial interface : weekend die-shot

Since August 2014 the internet has been flooded with WiFi-serial modules based on the new ESP8266 chip, which are currently being sold for less than $4. The Chinese company Espressif managed to cram an entire WiFi, TCP/IP and HTTP stack into on-chip memory, without external DRAM. The analog front-end requires minimal external components; all filters are internal. All this allowed them to offer an extremely aggressive price. The chip has the marking ESP8089, which is their more advanced 40nm product. Apparently, the two only differ in bonding and ROM content.

Die size 2050x2169 µm, half of which is occupied by the transceiver and PA, 25% by on-chip memory (rough size estimate ~300KiB), and the rest by the Xtensa LX106 CPU core and other digital logic.

Chinese engineers did an outstanding job here, finally making WiFi IoT devices cost-effective. Let's hope Espressif will eventually open up more internal chip information for amateurs and end users.
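For anyone curious what "WiFi-serial" means in practice: the module is driven over a plain UART with an AT command set. The sketch below only builds the command framing (CR+LF terminated); the actual serial call is commented out, since port name and baud rate vary between firmware builds (115200 8N1 is common but not guaranteed). `AT` and `AT+GMR` are commands from the stock AT firmware:

```python
# ESP8266 AT-style command framing: commands are terminated with CR+LF.
def at_command(cmd):
    return (cmd + "\r\n").encode("ascii")

print(at_command("AT"))       # b'AT\r\n' -> module should answer "OK"
print(at_command("AT+GMR"))   # firmware version query

# With pyserial (port name and baud rate are assumptions):
# import serial
# with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
#     port.write(at_command("AT"))
#     print(port.read(64))
```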

October 19, 2014 08:32 PM

October 18, 2014

Bunnie Studios

Name that Ware October 2014

The Ware for October 2014 is shown below.

Very busy with getting Novena ready for shipping, and Chibitronics is ramping into full holiday season production. And then this darn thing breaks! Well, at least I got pictures to share.

Have fun!

by bunnie at October 18, 2014 11:32 AM

Winner, Name that Ware September 2014

The Ware from September 2014 was a Totalphase Beagle USB 480. Gratz to Nick Ames for having the first correct guess, email me for your prize! Unfortunately, none of the claims on the FPGA identification were convincing enough for me to accept them without having to do a lot of legwork of my own to verify.

by bunnie at October 18, 2014 11:31 AM

October 15, 2014

Richard Hughes, ColorHug

GNOME Software and Fonts

A few people have asked me now “How do I make my font show up in GNOME Software” and until today my answer has been something along the lines of “mrrr, it’s complicated“.

What we used to do is treat each font file in a package as an application, and then try to merge them together using some metrics found in the font and 444 semi-automatically generated AppData files from a manually updated .csv file. This wasn’t ideal as fonts were being renamed, added and removed, which quickly made the .csv file obsolete. The summary and descriptions were not translated and hard to modify. We used the pre-0.6 format AppData files as the MetaInfo specification had not existed when this stuff was hacked up just in time for Fedora 20.

I’ve spent the better part of today making this a lot more sane, but in the process I’m going to need a bit of help from packagers in Fedora, and maybe even helpful upstreams. These are the notes of what I’ve got so far:

Font components are supersets of font faces, so we’d include fonts together that make a cohesive set; for instance, “SourceCode” would consist of “SourceCodePro“, “SourceSansPro-Regular” and “SourceSansPro-ExtraLight“. This is so the user can press one button and get a set of fonts, rather than having to install something new when they’re in the application designing something. Font components need a one-line summary for GNOME Software and optionally a long description. The icon and screenshots are automatically generated.

So, what do you need to do if you maintain a package with a single font, or where all the fonts are shipped in the same (sub)package? Simply ship a file like this as /usr/share/appdata/Liberation.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <summary>Open source versions of several commercial fonts</summary>
  <description>
    <p>
      The Liberation Fonts are intended to be replacements for Times New Roman,
      Arial, and Courier New.
    </p>
  </description>
  <url type="homepage"></url>
</component>

There can be up to 3 paragraphs of description, and the summary has to be just one line. Try to avoid too much technical content here; this is designed to be shown to end users who probably don’t know what TTF means or what MSCoreFonts are.
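The official check is `appstream-util validate` from appstream-glib; the sketch below is just a minimal illustration of the two constraints above (single-line summary, at most three description paragraphs), run against a stand-in snippet with the homepage URL left empty:

```python
import xml.etree.ElementTree as ET

# Stand-in MetaInfo snippet; the homepage URL is deliberately left empty.
metainfo = """<?xml version="1.0" encoding="UTF-8"?>
<component type="font">
  <summary>Open source versions of several commercial fonts</summary>
  <description>
    <p>The Liberation Fonts are intended to be replacements for
    Times New Roman, Arial, and Courier New.</p>
  </description>
  <url type="homepage"></url>
</component>"""

root = ET.fromstring(metainfo)
summary = root.findtext("summary")
paragraphs = root.findall("./description/p")

assert "\n" not in summary          # summary must be a single line
assert 1 <= len(paragraphs) <= 3    # up to three description paragraphs
print("ok")
```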

It’s a little more tricky when there are multiple source tarballs for a font component, or when the font is split up into subpackages by a packager. In this case, each subpackage needs to ship something like this into /usr/share/appdata/LiberationSerif.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
</component>

This won’t end up in the final metadata (or be visible) in the software center, but it will tell the metadata extractor that LiberationSerif should be merged into the Liberation component. All the automatically generated screenshots will be moved to the right place too.

Moving the metadata to font packages makes the process much more transparent, letting packagers write their own descriptions and actually influence how things show up in the software center. I’m happy to push some of my existing content from the .csv file upstream.

These MetaInfo files are not supposed to replace the existing fontconfig files, nor do I think they should be merged into one file or format. If your package just contains one font used internally, or where there is only partial coverage of the alphabet, I don’t think we want to show this in GNOME Software, and thus it doesn’t need any new MetaInfo files.

by hughsie at October 15, 2014 01:48 PM

October 08, 2014

Video Circuits

Why Not, Jim Sosnin (1980)

A recent upload by Jim Sosnin drawn to my attention by the ever vigilant Jeffrey

"Video synthesis demo from 1980, realised using EMS Spectron video synth plus some homebrew gear. The audio was created in 1978 using 3 Transaudio synths linked together. This digital transfer via an old U-matic (3/4-inch format) VCR, repaired for the occasion, to retrieve original stereo audio (my more recent VHS copy had mono audio only)."

by Chris ( at October 08, 2014 06:44 AM

October 06, 2014


Analog Devices AD558 - MIL-Spec 8-bit I²L DAC : weekend die-shot

Analog Devices AD558 is an 8-bit I²L DAC in ceramic package (MIL spec).

It is still an open question how this chip got into the ex-USSR/Russia - the anonymous reader left no comments on that (this smells like the Cold War...). It is no secret that Russia had no extensive civilian IC assortment in manufacturing, hence all military ICs had to be designed and manufactured from scratch (i.e. all R&D, prototypes and masks had to be paid for by the government). Under such conditions, providing the whole variety of domestic ICs is economically impossible, at least without government spending comparable to the whole world's expenses on IC R&D. So the "temporary", "case-by-case" permit to use imported (both legitimately and not-so-legitimately) Western ICs in military equipment "until domestic products are ready" is still here after 24 years, despite numerous attempts to end this practice.

Die size 2713x2141 µm, 6µm manufacturing technology, trimming laser was leaving ~8µm diameter spots.

Oh, these rounded resistors are just beautiful... Autorouters in 2014, do you see this?
Note how the amount of laser trimming on the R-ladder differs from bit to bit.

PS. Could anyone share the position of Western engineers on plastic-vs-ceramic packages for military/space-grade ICs? It appears modern plastic packages offer more benefits (like better G-shock/vibration reliability and, obviously, cost) without sacrificing anything (temperature range and moisture are less of a concern now, and radiation was never a concern for a package).

October 06, 2014 09:52 PM

October 03, 2014

Bunnie Studios

Novena Update

It’s been four months since we finished Novena’s crowd funding campaign, and we’ve made a lot of progress. Since then, a team of people has been hard at work making Novena a reality.

It takes many hands to build a product of this complexity, and we couldn’t do it without our dedicated and hard-working team at AQS. Above is a photo from the conference room where we did the T1 plastics review in Dongguan, China.

In this update, we’ll be discussing progress on the Casing, Electronics, Accessories, Firmware and the Community.

Case construction update
We’re very excited that the Novena cases we’re carrying around are now made entirely of production-process hardware — no more prototypes. A total of 10 injection molding tools, many of them family molds, have been opened so far; for comparison, a product like NeTV or chumby had perhaps 3-4 tools.

For those not familiar with injection molding, it’s a process whereby plastic is molded into a net shape from hot, high pressure liquid plastic forced into a cavity made out of hardened steel. The steel tool is a masterpiece of engineering in itself – it’s a water-cooled block weighing in at about a ton, capable of handling pressures found at the bottom of the Mariana Trench, and the internal surfaces are machined to tolerances better than the width of a human hair. And on top of that, it contains a clockwork of moving pieces, with dozens of ejector pins, sliders, lifters and parting surfaces coming apart and back together again smoothly over thousands of cycles. It’s amazing that these tools can be crafted in a couple of months, yet here we are.

With so much complexity involved, it’s no surprise that the tools require several iterations of refinement to get absolutely perfect. In tooling jargon, the iterations are referred to as T0, T1, T2…etc. You’re doing pretty good if you can go to full production at T2; we’re currently at T1 stage. The T1 plastics are 99% there, with a few issues relating to flow and knit lines, as well as a couple of spots where the plastic is warping during cooling or binding to the tool during ejection and causing some deformation. This manifests itself in a couple spots where the seams aren’t as tight as we’d like them to be in the case.

Most people have only seen products of finished tooling, so I thought I’d share what a pretty typical T0 shot looks like, particularly for a large and complex tool like the Novena case base part. Test shots like this are typically done in colors that highlight defects and/or the resin is available as scrap, hence the gray color. The final units will be black.

There’s a lot going on with this piece of plastic, so below is a visual guide to some of the artifacts.

In the green boxes are a set of “sink marks”. These happen when the opposite side of the plastic has a particularly thin or thick feature. These areas will cool faster or slower than the bulk of the plastic, causing these regions to pucker slightly and cause what looks like a bit of a shadow. It’s particularly noticeable on mirror-finish parts. In this case, the sink marks are due to the plastic underneath the nut bosses of the Peek array being much thinner than the surrounding plastic. The fix to this problem was to slightly thicken that region, reducing the overall internal clearance of the case by 0.8mm. Fortunately, I had designed in a little extra clearance margin to the case so this was possible.

The red arrow points to a “knit line”. This is a region where plastic flow meets within the tool. Plastic, as it is injected into the cavity, will tend to flow from one or more gates, and where the molten plastic meets itself, it will leave a hairline scar. It’s often located at points of symmetry between the gates where the plastic is injected (on this tool, there are four gates located underneath the spot where the rubber feet go — gates are considered cosmetically unattractive and thus they are strategically placed to hide their location).

The white feathery artifacts, as indicated by the orange arrow, are flow marks. In this case, it seems plastic was cooling a bit too quickly within the tool, causing these streaks. This problem can often be fixed by adjusting the injection pressure, cycle length, and temperature. This tweaking is done using test shots on the molding machine, with one parameter at a time tweaked, shot after shot, until its optimum point is found. This process can sometimes take hundreds of shots, creating a small hill of scrap plastic as a by-product.

Most of these gross defects were fixed by T1, and the plastic now looks much closer to production-grade (and the color is now black). Below is the T1 shot in initial testing after transferring live hardware into the plastics.

There’s still a few issues around fit and finish. The rear lip is binding to the tool slightly during ejection, which is causing a little bit of deformation. Also, the panel we added at the last minute to accommodate oversized expansion boards isn’t mating as tightly as we’d like it to. But, despite all of these issues, the case feels much more solid than the prototypes, and the gas piston mechanism is finally consistent and really smooth.

Front bezel update
The front bezel of Novena’s case (not to be confused with the aluminum LCD bezel) has gone through a couple of changes since the campaign. When we closed funding, it had two outward-facing USB ports and one switch. Now, it has two switches, one outward-facing USB port, and one inward-facing USB port.

One switch is for power — it goes directly to the power board and thus can be used to turn the system on and off even when the main board is fully powered down.

The other switch is wired to a user key press, and the intent is to facilitate Bluetooth association for keyboards that are being stupid. It seems some keyboards can take up to half a minute to cycle through something (presumably, it’s trying to be secure) before they connect. There are hacks you can do to bypass that, but they require you to run a script on the host, and the idea is that by pressing this button users can trigger a convenience script to get past the utter folly of Bluetooth. This switch also doubles as a wake-up button for when the system is in suspend.

As for the USB ports, there are still four ports total in the design, but the configuration is now as follows:

  • Two higher-current capable ports on the right
  • One standard-current capable port on the front
  • One standard-current capable port facing toward the Peek Array
In other words, we face one USB port toward the inside of the machine; since half the fun of Novena is modding the hardware, we figure making a USB port available on the inside is at least as useful as making it available on the outside.

For those who don’t do hardware mods, it’s also a fine place to plug in small dongles that you generally keep permanently attached, such as a radio transceiver for your keyboard. It’s a little inconvenient to initially plug in the dongle, but keeping the radio transceiver dongle facing the inside helps protect it from damage when you throw your laptop into your travel bag.

We toyed with several iterations of speaker selection for Novena. One of the core ideas behind the design was to make speaker choice something every user would be encouraged to make on their own. One driving reason for this is that some people really listen to music on their laptop when they travel, but others simply rely upon the speaker for notification tones and would prefer to use headphones for media playback.

Physics dictates that high-quality sound requires a certain amount of space and mass, and so users who have a more relaxed fidelity requirement should be able to reclaim the space and weight that nicer speakers would require.

Kurt Mottweiler, the designer of the Heirloom model, had selected a nice but very compact off-the-shelf speaker, the PUI ASE06008MR-LW150-R, for the Heirloom. We evaluated it in the context of the standard Novena model and found that it fit well into the Peek Array and had acceptable fidelity, particularly for its size. And so we adopted it as the standard offering for audio. However, it will be provided with a mounting kit that allows for easy removal, so users who need to reclaim the space the speakers take, or who want to go the other way and put in larger speakers, can do so with ease.

PVT2 Mainboard
The Novena mainboard went through a minor revision prior to mass production. The 21-point change list can be viewed here; the majority of the changes focused on replacing or updating components that were at risk of EOL. The two most significant changes from a design standpoint were the addition of an internal FPC header to connect to the front bezel cluster, and a dedicated hardware RTC module.

The internal FPC header was added to improve the routing of signals from the mainboard to the front bezel cluster. We had to run two USB ports, plus a smattering of GPIOs and power, to the front bezel, and the original scheme required multiple cables to execute the connection. The updated design condenses all of this into a single FPC, thereby simplifying the design and improving reliability.

A dedicated hardware RTC module was added because we couldn’t get the RTC built into the i.MX6 to perform well. It seems that the CPU simply had a higher leakage on the RTC than reported in the datasheet, and thus the lifetime of the RTC when the system was turned off was measured in, at best, minutes. We made the call that there was too much risk in continuing to develop with the on-board RTC and opted to include an external, dedicated RTC module that we knew would work. In order to increase compatibility with other i.MX6 platforms, we picked the same module used by the Solid-Run Hummingboard, the NXP PCF8523T/1.

The GPBB got a face-lift and a couple of small mods to make it more hacker-friendly.

I think everything looks a little bit nicer in matte black, so where it doesn’t compromise production integrity we opted to use a matte black soldermask with gold finish.

Beyond the obvious cosmetic change, the GPBB also features an adjustable I/O voltage for the digital outputs. The design change is still going through testing, but the concept is to allow, by default, a 5V/3.3V selectable setting in software. However, the lower voltage can also be adjusted to 2.5V and 1.8V by changing a single resistor (R12), which I also labelled “I/O VOLTAGE SET” and made a 1206 part so soldering novices can make the change themselves.

In our experience, we’re finding an ever-increasing gulf between the voltage standards used by hobbyists and what we’re actually finding inside equipment we need to reverse engineer; and thus, to accommodate both applications, a flexible voltage output selection mechanism was added to the GPBB.

Desktop Passthrough
The desktop case originally included just the Novena mainboard and the front panel breakout. It turns out this makes power management awkward, as the overall power management system for the case was designed with the assumption that there is a helper microcontroller managing a master cut-off switch.

Complexity is the devil, and it’s been hard enough to get the software going for even a single configuration. So in the end we found it would be cheaper to introduce a new piece of hardware rather than deal with multiple code configurations.

Therefore, desktop systems are now getting a power pass-through board as part of the offering. It’s a simple PCBA that contains just the STM32 controller and power switch of the full Senoko board. This allows us to use a consistent gross power management architecture across both the desktop and the laptop systems.

Of course, this is swatting a fly with a sledgehammer, but this sledgehammer costs as much as the flyswatter, and it’s inconvenient to carry both a flyswatter and a sledgehammer around. And so yes, we’re using a 32-bit ARM CPU to read the state of a pushbutton and flip a GPIO, and yes, this is all done using a full multi-threaded real-time operating system (ChibiOS) running underneath it. It feels a little silly, which is why we broke out some of the unused GPIOs so there’s a chance some clever user might find an application for all that untapped power.

The battery pack for Novena is and will continue to be a wildcard in the stack. It’s our first time building a system with such a high-capacity battery, and working through all the shipping regulations to get these delivered to your front door will be a challenge.

Some countries are particularly difficult in terms of their regulations around the importation of lithium batteries. In the worst case, we’ll send your laptop with no battery inside, and we will ship separately, at our cost, an off-the-shelf battery pack from a vendor that specializes in RC battery packs (e.g. Hobby King). You will have the same battery we featured in the crowd funding campaign, but you’ll need to plug it in yourself. We consider this to be a safe fall-back solution, since Hobby King ships thousands of battery packs a day all around the world.

However, this did not stop us from developing a custom battery pack. As it’s very difficult to maintain a standing stock of battery packs (they need to be periodically conditioned), we’re including this custom battery pack only for backers of the campaign, provided their country of residence allows its import (and we won’t know for sure until we try). We did get UN38.3 certification for the custom battery pack, which in theory allows it to be shipped by air freight, but regulations around this are in flux. It seems countries and carriers keep inventing new rules, particularly with all the paranoia about the potential use of lithium batteries as incendiary devices, and we don’t have the resources to keep up with the zeitgeist.

For those who live in countries that allow the importation of our custom pack, the new pack features a 5000mAh rated capacity (about 2x the capacity of the pack we featured in the crowd campaign, which had 3000mAh printed on the outside but actually delivered about 2500mAh in practice). In real-life testing, the custom pack is getting about 6-7 hours of runtime with minimal power management enabled. Also, since I got to specify the battery, I know this one has the correct protection circuitry built into it, and I know the provenance of its cells, so I have a little more confidence in its long-term performance and stability.
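A back-of-envelope reading of those runtime figures (a rough midpoint estimate, not a measurement):

```python
# Figures from the text: 5000 mAh rated capacity, "6-7 hours" of runtime.
capacity_mah = 5000
runtime_h = (6 + 7) / 2            # midpoint of the quoted 6-7 hour range
avg_draw_ma = capacity_mah / runtime_h
print(round(avg_draw_ma))          # ~769 mA average system draw
```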

Of course, it’s a whole different matter convincing the lawmakers, customs authorities, and regulatory authorities of those facts…but fear not, even if they won’t accept this custom limited-edition battery, you will still get the original off-the-shelf pack promised in the campaign.

    Hard Drive
    In the campaign, we referenced providing 240GiB Intel 530 (or equivalent) and 480GiB Intel 720 drives for the laptop and heirloom models, respectively. We left the spec slightly ambiguous because the SSD market moves quickly, and probably the best drive last February when we drew up the spec will be different from the best drive we could get in October, when we actually do the purchasing.

    After doing some research, it’s our belief that the best equivalent drives today are the 240GiB Samsung 840 EVO (for the laptop model) and the 512GiB Samsung 850 Pro (for the Heirloom). We’ve been personally using the 840 EVO in our units for several months now, and they have performed admirably. An important metric for us is how well the drives hold up under unexpected power outages — this happens fairly often, for example, when you’re doing development work on the power management subsystem. Some hard drives, such as the SanDisk Extreme II, fail quite reliably (how’s that for an oxymoron) after a few unexpected power-down cycles. We’ve also had bad luck with OCZ and Crucial drives in the past.

Intel drives have generally been pretty good, except that Intel stopped doing their own controllers for the 520 and 530 series and instead started using SandForce controllers, which in my opinion removes any potential advantage they could have had as both the maker of the memory chips and the maker of the controller. The details of how flash memory performs, degrades, and yields are extremely process-specific, and at least in my fantasy world a company that produces flash + controller combinations should have an advantage over companies that have to mix-and-match multiple flash types with a semi-generic controller. Furthermore, while the Intel 720 does use their home-grown controller, it's a power hog (over 5W active power) and requires a 12V rail, and is thus not suitable for laptop use.

The 840 EVO series comes with a reasonable 3-year warranty, and it has held up well in one site's write-endurance test. After using mine for several months, I've had no complaints about it, and I think it's a solid everyday drive for firmware development. We also have a web server that hosts most of the media content for this and a couple of other blogs, wikis, and bug-tracking tools, and it's a Novena running off an 840 EVO.

For the premium Heirloom users, we're very excited to build in the 850 PRO series. This drive comes with a serious warranty that matches the "heirloom" name: 10 years. What enables Samsung to make such a strong reliability claim is even more remarkable. The drive uses a technology that Samsung has branded "V-NAND", which I consider to be the first bona-fide production-grade 3D transistor technology. Intel claims they make 3D transistors, but that's just marketing hype: yes, the gate region has a raised surface topology, but you still only get a single layer of devices. From a design standpoint you're still working with a 2D graph of devices. It's like calling Braille a revolutionary 3D printing technology. They should have stuck with what I consider to be the "original" (and more descriptive, less misleading) name, FinFET, because by calling these 3D transistors I don't know what they're going to call actual 3D arrays of transistors, if they ever get around to making them.

Chipworks did an excellent initial analysis of Samsung's V-NAND technology, and you can see from the SEM image they published that V-NAND isn't about stacking just a couple of transistors: Samsung is shipping a full-on 38-layer sandwich:

This isn't some lame Intel-style bra-padding exercise. This is full-on process technology bad-assery at its finest. This is Neo decoding the Matrix. This is Mal shooting first. It's a stack of almost 40 individual, active transistors in a single spot. It's a game changer, and it's not vaporware. Heirloom backers will get a laptop with over 4 trillion of these transistors packed inside, and it will be awesome.

    Sorry, I get excited about these kinds of things.

    From the software side, we’re working on finalizing the kernel, bootloader, and distro selection, as well as deciding what you’ll see when you first power on Novena.

    Marek Vasut is working on getting Novena supported in mainline U-Boot, which involves a surprising number of patches. Few ARM boards support as much RAM as Novena, so some support patches were needed first. Full support is in progress, including USB and video.

We intend to ship with a mainline kernel, but interestingly Jon Nettleton maintains a 3.14 long-term-support kernel that is a hybrid of Freescale's chip-specific patches combined with many backported upstream patches. Users may prefer this kernel over the upstream one, as it has better support for thermal events and for power management.

While we prefer to go with an upstream kernel, and to get our changes pushed into mainline, other users might find that this kernel's interesting blend of community and vendor code satisfies their needs better.

The kernel that we'll use has most of the important parts upstreamed, including the audio chip driver, which should be part of the 3.17 kernel. We're still carrying a few local patches for various reasons, ranging from specialized hacks to experimental features, features that are not yet ready to push upstream, and features that rely on other code that is not yet upstream.

    For example, the display system on a laptop is very different from what is usually found on an ARM device, and we have local patches to fix this up. In most ARM devices, the screen is fixed during boot and it isn’t possible to hot-swap displays at runtime. Novena supports two different displays at once, and allows you to plug in an HDMI monitor without needing to reboot.

Speaking of displays, the community has been hard at work on an accelerated 2D Xorg DDX driver. 2D acceleration is important because most of the time users are interacting with the desktop, and the 2D hardware uses significantly less power than the 3D hardware. On a desktop machine, the 3D chip is used to composite the desktop. On Novena, which has no fan and a small overall active-power footprint, saving power is very important. By taking advantage of the 2D-only hardware, we save power while getting a smoother experience. A few bugs remain in the 2D driver, but it should be ready by the time we ship.

    There is a 3D driver that is in progress as well. It’s able to run Quake 3 on the framebuffer, but still has to be integrated into an OpenGL ES driver before it works under X.

We've also been working on getting a root filesystem set up. This includes deciding which packages are installed, and customizing the list of software repositories. We want to add a repository for our kernel and bootloader, as well as for various packages that haven't made it upstream, such as an imx6 version of irqbalance. This will allow us to provide you with updated kernels as we add more support.

    Finally, the question remains of what you’ll see when you first power it up. In Linux, it’s not at all common to have a first-boot setup screen where you create your user, set the time, and configure the network. That’s common in Windows and OS X, which come preinstalled, but under Linux that’s generally taken care of by the installer. As we mull the topic, we’re torn between creating a good desktop-style experience vs. making a practical embedded developer’s experience. A desktop-style experience would ship a blank-slate and prompt the user to create an account via a locally attached keyboard and monitor; however, embedded developers may never plug a monitor into their device, and instead prefer to connect via console or ssh, thereby requiring a default username, password and hostname. Either way, we want to create just a single firmware common across all platforms, and so special-casing releases to a particular target is the least desired solution. If you have an opinion, please share it in our user forum.
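As a sketch of the trade-off above (purely illustrative: the mode names and logic here are invented for discussion, not Novena's actual firmware behavior):

```python
# Hypothetical first-boot policy: one firmware image, two experiences.
def first_boot_mode(display_connected: bool, already_provisioned: bool) -> str:
    """Pick the first-boot experience without special-casing the release."""
    if already_provisioned:
        return "normal"           # setup was completed on a previous boot
    if display_connected:
        return "setup-wizard"     # desktop-style: prompt locally for a new account
    return "headless-default"     # embedded-style: default user + ssh access

# A monitor-less board on its very first boot would come up headless.
print(first_boot_mode(display_connected=False, already_provisioned=False))
```

The point of the sketch is that a single image can branch at runtime on what it detects, rather than shipping separate desktop and embedded builds.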

We're pleased to see that even before shipping, we have a few alpha developers who continue to be very active. In addition to Jon Nettleton (gfx), Russell King (also gfx), and Marek Vasut (u-boot), we have a couple of other alpha users' efforts we'd like to highlight in this update.

    MyriadRF continues to move forward with their SDR solution for Novena. About three weeks ago they sent us pre-production boards, and they are looking good. We’ve placed a binding order for their boards, and things look on track to get them into our shop by November, in time for integration with the first desktop units we’ll be shipping. MyriadRF is working on a fun demo for their hardware, but I’ll save that story for them to tell :)

    The CrypTech group has also been developing applications with the help of Novena. The CrypTech project is developing a BSD / CC BY-SA 3.0 licensed reference design and prototype examples of a Hardware Security Module. Their hope is to create a widely reviewed, designed-for-crypto device that anyone can compose for their application and easily build with their own trusted supply chain. They are using Novena to prototype elements of their design.

The expansion board highlighted above is a prototype noise source based on avalanche noise from the transistor visible in the middle of the board. CrypTech uses that noise to generate entropy in the FPGA. The entropy is then combined with entropy generated by ring oscillators in the FPGA and mixed using, e.g., SHA-512 to generate seeds. The seeds are then used to initialize the ChaCha stream cipher, ultimately resulting in a stream of cryptographically sound random values. The result is a high-performance, state-of-the-art random number generator coprocessor. This of course represents just a first draft; since the implementation is done in an FPGA, the CrypTech team will continue to evolve their methodology and experiment with alternative methods to generate a robust stream of random numbers.
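The conditioning stage described above can be sketched in Python. This is a minimal illustration of the idea, not CrypTech's implementation: the entropy byte strings are placeholders for real samples, and a hash-counter keystream stands in for ChaCha (which is not in the standard library):

```python
import hashlib

def condition_entropy(avalanche: bytes, ring_osc: bytes) -> bytes:
    """Mix two raw entropy sources into a 64-byte seed using SHA-512."""
    h = hashlib.sha512()
    h.update(avalanche)
    h.update(ring_osc)
    return h.digest()  # 64 bytes of conditioned seed material

def keystream(seed: bytes, nbytes: int) -> bytes:
    """Stand-in stream generator: hash-counter mode in place of ChaCha."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha512(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

# Placeholder raw samples; the real inputs come from the avalanche
# circuit and the FPGA's ring oscillators.
seed = condition_entropy(b"raw avalanche samples", b"ring-oscillator samples")
stream = keystream(seed, 32)
```

Hashing the concatenated sources means biases in either raw input are whitened before any seed material is exposed, which is why the conditioning step sits between the noise sources and the cipher.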

    Thanks to the CrypTech team for sharing a sneak-peek of their baby!

    Looking Forward

    From our current progress, it seems we’re still largely on track to release an initial shipment of bare boards to early backers in late November, and have an initial shipment of desktop units ready to go by late December. We’ll be shipping the units in tranches, so some backers will receive units before others.

    Our shipping algorithm is roughly a combination of how early someone backed the campaign, modified by which region of the world you’re in. As every country has different customs issues, we will probably ship just one or two items to each unique country first to uncover any customs or regulatory problems, before attempting to ship in bulk. This means backers outside the United States (where Crowd Supply’s fulfillment center is located) will be receiving their units a bit later than those within the US.
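As a rough illustration of that ordering (the backer names, countries, and pilot count below are entirely made up):

```python
from collections import defaultdict

# Hypothetical backers: (name, backing order, country)
backers = [
    ("alice", 1, "US"), ("bob", 2, "DE"), ("carol", 3, "US"),
    ("dave", 4, "DE"), ("erin", 5, "JP"),
]

def shipping_order(backers, pilots_per_country=1):
    """Earliest backers ship first, but only a pilot unit or two goes to
    each country before bulk shipments to that country are attempted."""
    pilots, bulk = [], []
    shipped = defaultdict(int)
    for name, order, country in sorted(backers, key=lambda b: b[1]):
        if shipped[country] < pilots_per_country:
            pilots.append(name)       # uncovers customs issues early
            shipped[country] += 1
        else:
            bulk.append(name)
    return pilots + bulk

print(shipping_order(backers))  # ['alice', 'bob', 'erin', 'carol', 'dave']
```

Note how erin, the latest backer, still ships in the pilot wave because she is the first backer in her country.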

And as a final note: if there's one thing we've learned in the hardware business, it's that you can't count your chickens before they've hatched. Good progress to date doesn't mean we've got an easy path to finished units. We still have a lot of hills to climb and rivers to cross, but at least for now we seem to be on track.

    Thanks again to all of our Novena backers, we’re looking forward to getting hardware into your hands soon!

    -bunnie & xobs

    by bunnie at October 03, 2014 06:15 PM

    October 01, 2014

    Free Electrons

Quarterly news: January 2013

This article was published in our quarterly newsletter.

The whole Free Electrons team wishes you a happy new year 2013, with all the success you hope for in your personal and professional projects, and in your contributions to the lives of others. We take this opportunity to give you some news about Free Electrons.

In 2012, Free Electrons continued to work on development projects. The main difference from 2011 is that the projects were much longer. Here are the most important ones:

• Linux kernel development to support Marvell's Armada 370 and Armada XP processors in the mainline Linux kernel. This represents months of engineering! The changes we made appear on
• Linux kernel development and setup of a development environment for a new i.MX28-based system designed by Crystalfontz, adding support for this board to mainline Linux. You will find more details on the project's Kickstarter page!
• Setup of a build system, bootloader and kernel driver development, improvement of the update mechanism, and more generally embedded Linux system development work.
• Kernel development for the analog-to-digital converters of Atmel AT91 processors, and inclusion in the mainline kernel sources.
• Boot time reduction and power management audit for a MIPS-based payment terminal.
• Boot time reduction on an ARM-based development platform for payment terminals.
• Development, integration, and support of an embedded Linux system

Whether through contracts or direct contributions, 2012 gave us many opportunities to contribute to open-source projects, in particular:

• 195 patches included in the Linux kernel, not counting those accepted by maintainers but not yet visible in Linus Torvalds' tree. See for more details.
• 448 patches included in the Buildroot build system: details on
• 9 patches included in the U-Boot bootloader.
• 7 patches for the Barebox bootloader: details on

By the way, here is the git command you can run in the corresponding repositories to obtain these numbers yourself:

    git shortlog --no-merges -sn --author "" --since="01/01/2012" --until="12/31/2012"

We also delivered multiple sessions of our embedded Linux and Linux kernel device driver development courses. We also finished migrating our training materials from the Open Document format to LaTeX, and their sources are now available in our public git repository. This makes it much easier to track changes and to submit contributions.

We also created a new course on Android system development, and delivered multiple sessions on-site for our customers as well as a public session in Toulouse. It is a four-day program covering the architecture of the Android system, how to build and customize the system for a particular hardware platform, and how to extend it to support new devices.

As in the previous year, we shared our experience at international technical conferences:

While attending these conferences, we also recorded and published videos of the talks:

Thanks to their contributions to the mainline Linux kernel on the ARM platform, Grégory Clément and Thomas Petazzoni were also invited to the ARM minisummit at the Linux Kernel Summit in San Jose in August. They were involved in the technical decisions about the upcoming evolutions of the Linux kernel on the ARM architecture.

We also organized and took part in two "Buildroot developer days" events, one in Brussels after FOSDEM, and the second in Barcelona after ELC Europe.

We also continued to take part in the development of the Linaro community, a non-profit engineering organization whose goal is improving Linux on the ARM platform. This engagement has now come to an end, which allows Michael Opdenacker to get back to more technical projects.

It is now time to share our plans for 2013.

We plan to hire new engineers to meet the ever-growing demand for our development and training services. In particular, a new engineer is joining us in March.

We are also organizing new public training sessions in France, whose dates are now available:

We also plan to announce several new courses. Being very busy with projects in 2012, we did not have time to make progress on the goals we announced a year ago:

• Git training. A two-day course to master the use of the Git distributed version control system, whether for internal projects or for contributing to open-source projects.
• Linux kernel debugging, tracing, and performance analysis training. A one- or two-day session to trace the execution of kernel functions and track down the causes of malfunctions and performance problems.
• Boot time reduction training. A one- or two-day course to learn and master the methodology and techniques to make your embedded Linux systems boot faster.

As we are only in the early stages of preparing these courses, feel free to contact us and share your expectations, to influence their final contents, in case you are interested in such training.

We will also continue to take part in the most important technical conferences. In particular, Free Electrons engineers will attend the Android Builders Summit and the Embedded Linux Conference in San Francisco, as well as the Embedded Linux Conference Europe in Edinburgh in October. Attending these conferences allows our engineers to keep up with the latest developments in the embedded Linux world and to make useful contacts in the community. Don't hesitate to attend these conferences too, to develop your technical knowledge, and perhaps take the opportunity to meet us!

Finally, we will also make a greater effort to publish this newsletter truly every quarter. In 2012, we were so busy with our projects that we failed to publish newsletters for the third and fourth quarters.

You can keep following Free Electrons news by reading our blog in English (31 articles in 2012) and our French news, and by following our short news on Twitter.

Once again, happy new year 2013!

The Free Electrons team.

    by Michael Opdenacker at October 01, 2014 03:14 AM

    September 29, 2014

    Village Telco

    SECN 2.0 Final Released

It's been a while coming, but we're happy to announce the general release of the SECN 2.0 firmware. This firmware is available for the Mesh Potato 2.0 and a range of TP-Link and Ubiquiti devices. We posted details in the RC1 release of the software, but here is a comprehensive list of features:

    • OpenWrt Attitude Adjustment:  SECN 2.0 is based on the final release of OpenWrt Attitude Adjustment.  We will continue to tie SECN releases as closely as possible to OpenWrt releases in order to maximise device compatibility.
    • Batman-adv:  The SECN firmware now runs the 2013.4 release of batman-adv which includes essential features such as Bridge Loop Avoidance.
• WAN Support:  SECN 2.0 now offers WAN features that allow the device to configure an upstream connection via WiFi, USB modem, or mesh.
    • Configurable Ethernet:  Ethernet ports can be individually configured for WAN or LAN function.
    • Timezone setting
    • WiFi Country Code setting
    • Web page for Firmware Upgrade

    The SECN 2.0 firmware can be downloaded at  Please check all downloaded files against their MD5 sums prior to flashing your device.  If you have any questions about upgrading your firmware, please don’t hesitate to ask questions in the development community.

    Also available very soon will be an SECN 2.0 firmware for the MP1 which will allow full compatibility among first generation Mesh Potatoes and all current generation devices including the MP2 Basic, MP2 Phone, and TP-Link/Ubiquiti devices.

    This final release of the 2.0 SECN firmware wouldn’t have been possible without countless hours of tweaking, testing and innovation by Terry Gillett.  Thanks too to Keith Williamson and Elektra for invaluable support.

    Upcoming Firmware

    SECN 2.1
    Firmware for the MP2 Phone is currently in alpha development.  The 2.1 release of the SECN firmware will be the first release to fully support the MP2 Phone.
    SECN 2.x
    Successive point releases of the 2.0 firmware will include support for:
    • a softphone directory web page which will allow for local registration and management of SIP-enabled devices to a master Mesh Potato allowing for small-scale local directory management and services for VoIP
    • local instant messaging support via XMPP through the integration of the Prosody jabber server
• integration of a Twitter Bootstrap based UI, which will make for a faster and more intuitive configuration interface.
    SECN 3.0
The 3.0 release of the SECN firmware will be coordinated with the Barrier Breaker release of OpenWrt.  It will also include the most recent updates to the Batman-adv mesh protocol.

    by steve at September 29, 2014 02:30 PM

    Richard Hughes, ColorHug

    Shipping larger application icons in Fedora 22

In GNOME 3.14 we show any valid application in the software center with an application icon of 32×32 or larger. Currently a 32×32 icon has to be padded with 16 pixels of whitespace on all 4 edges, and also has to be scaled 2× to match other UI elements on HiDPI screens. This looks very fuzzy and out of place, and lowers the quality of an otherwise beautiful installation experience.

    For GNOME 3.16 (Fedora 22) we are planning to increase the minimum icon size to 48×48, with recommended installed sizes of 16×16, 24×24, 32×32, 48×48 and 256×256 (or SVG in some cases). Modern desktop applications typically ship multiple sizes of icons in known locations, and it’s very much the minority of applications that only ship one small icon.
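The cutoff described above amounts to a simple filter over each application's installed icon sizes. A minimal sketch (the application names and size lists here are invented for illustration):

```python
# Sketch of the minimum-icon-size policy: an application is listed only
# if it installs an icon at or above the cutoff.
MIN_ICON_SIZE = 48  # GNOME 3.16 / Fedora 22 minimum, per the post

# Hypothetical apps mapped to the icon sizes they install.
apps = {
    "gnome-calculator": [16, 24, 32, 48, 256],
    "legacy-editor": [32],          # only a 32x32 icon: would be dropped
    "vector-paint": [16, 48],
}

listed = [name for name, sizes in apps.items()
          if max(sizes) >= MIN_ICON_SIZE]
print(sorted(listed))  # ['gnome-calculator', 'vector-paint']
```

Raising `MIN_ICON_SIZE` to 64 for F23, as hinted at below, would then also drop any application whose largest icon is 48×48.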

Soon I'm going to start nagging upstream maintainers to install icons larger than 32×32. If you're re-doing the icon, please generate a 256×256 or 64×64 icon with an alpha channel, as the latter will probably be the minimum size for F23 and beyond.

    At the end of November I’ll change the minimum icon size in the AppStream generator used for Fedora so that applications not fixed will be dropped from the metadata. You can of course install the applications manually on the command line, but they won’t be visible in the software center until they are installed.

    If you’re unclear on what needs to be done in order to be listed in the AppStream metadata, refer to the guidelines or send me email.

    by hughsie at September 29, 2014 11:59 AM

    September 28, 2014

    Bunnie Studios

    Name that Ware, September 2014

    The Ware for September 2014 is shown below.

This month's ware has a little bit of a story behind it, so I'll give you this much about it to set up the story: it's a USB protocol analyzer of some sort. Question is, what make and model?

    Now for the story.

Name that Ware is typically about things that cross my desk and get opened for one reason or another — sometimes simply curiosity, sometimes more than that. This is a case where it was more than curiosity.

    Turns out this analyzer broke at an inopportune moment. Xobs was working on a high-priority reverse engineering task that required some USB analysis. Unfortunately, when we plugged in the analyzer, it just reported a series of connect/disconnect events but no data. We initially suspected a driver issue, but after connecting the analyzer to a previously known good configuration, we suspected a hardware failure.

    So, it was time to take the unit apart and figure out how to repair it. Of course, this is a closed-source device (still eagerly anticipating my OpenVizsla) so there are no schematics available. No worries; you’ll often hear me make the claim that it’s impossible to close hardware because you can just read a circuit board and figure out what’s going on. This particular ware was certainly amenable to that, as the construction is a four-layer board with a relatively simple assortment of chips on one side only.

    The USB analysis front-end consists of three major types of chip, outlined below.

    The chips in the red boxes are a pair of LMH6559 1.75GHz bandwidth amplifiers. Fortunately the top marking, “B05A”, was resolvable with a google search plus a few educated guesses as to the function of the chips. The chip in the yellow box is a Fairchild USB1T11A, a full-speed USB transceiver. And the chip in the green outline box is a Microchip (formerly SMSC) USB3300, a high-speed USB to ULPI transceiver. A casual read of the four-layer PCB indicates that the USB signal is passed through from the B-type port to the A-type port, with the LMH6559 acting as buffers to reduce loading, plus a resistor network of some type to isolate the USB1T11A. We figured that the most likely cause of the issue was electrical overstress on the LMH6559′s, since they lay naked to the USB port and quite possibly we could have shorted the data wires to a high voltage at some point in time, thereby damaging the buffers. We did a couple of quick tests and became semi-convinced that these were actually working just fine.

    Most likely the issue wasn’t the USB1T11A; it’s well-isolated. So the next candidate was the USB3300. Fortunately these were in stock at Element14 in Singapore and just a few bucks each, so we could order at 4:30PM and have it delivered the next morning to our office for a very nominal delivery fee.

After replacing this chip, I was pleased to find that the unit came back to life. I have to say, I've found the hot-air rework skills I learned at Dangerous Prototypes' hacker camp to be incredibly useful; this level of rework is now child's play for me. I'm not quite sure how we damaged the USB3300 chip in the first place, but one possibility is that someone plugged something into the mini-DIN connector on the analyzer that wasn't meant to be plugged into the device.

    And so, despite this being a closed-source device, it ended up being repairable, although it would have been much more convenient and required a lot less guesswork to fix it had schematics been made available.

Significantly, the maker of this box was acutely aware of the fact that hardware is difficult to close, and attempted to secure their IP by scrubbing the original markings off of the FPGA. An inspection under the microscope shows that the surface texture of the top of the chip does not match the edges, a clear sign of reprocessing.

For what it's worth, this is the sort of thing you develop an eye for when looking for fake chips, as oftentimes they are remarked; in this case, though, the remarking was done as a security measure. The removal of the original manufacturer's markings isn't a huge impediment. If I cared enough, there are several ways I could try to guess what the chip was. Given the general age of the box, it's probably either a Spartan 3 or a Cyclone II of some type. Based on these guesses, I could map out the power pin arrangement and run a cross-check against the datasheets of these parts, and see if there's a match. Come to think of it, if someone actually does this for Name that Ware based on just these photos, I'll declare them the winner over the person who only guesses the make and model of the analyzer. Or, I could just probe the SPI ROM connected to the FPGA, observe the bitstream, and probably figure out which architecture, part, and density it was from there.
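The power-pin cross-check could be sketched like this; note that every pin coordinate below is an invented placeholder, not real datasheet data:

```python
# Sketch of identifying an unmarked FPGA by its power/ground pin pattern.
# Pin coordinates are hypothetical, not taken from any real datasheet.
observed_gnd_pins = {"A1", "B7", "C3", "F6"}  # mapped from the board with a meter

candidates = {
    "Spartan-3 (hypothetical package)": {"A1", "B7", "C3", "F6"},
    "Cyclone II (hypothetical package)": {"A1", "B2", "C3", "E5"},
}

def best_match(observed, candidates):
    """Rank candidate parts by overlap between observed and documented pins."""
    def score(pins):
        return len(observed & pins) / len(observed | pins)  # Jaccard similarity
    return max(candidates, key=lambda name: score(candidates[name]))

print(best_match(observed_gnd_pins, candidates))
```

With real ball-out tables from the two vendors' datasheets in place of the placeholder sets, the same overlap scoring would narrow down the family and package.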

    But wait, there’s more to the story!

    It turns out the project was pretty urgent, and we didn’t want to wait until the next day for the spare parts to arrive. Fortunately, my Tek MDO4000B has the really useful ability to crunch through analog waveforms and apply various PHY-level rules to figure out what’s going on. So, on any given analog waveform, you can tell the scope to try rules for things like I2C, SPI, UART, USB, etc. and if there’s a match it will pull out the packets and give you a symbolic analysis of the waveform. Very handy for reverse engineering! Or, in this case, we hooked up D+ to channel 1 and D- to channel 2, and set the “bus” trace to USB full-speed, and voila — protocol analysis in a pinch.

Above is a screenshot of what the analysis looks like. The top quarter of the UI shows the entire capture buffer. The middle of the window is a 200x zoom of the top waveform, showing the analog representation of the D+ and D- lines as the cyan and yellow traces. Just below the yellow trace, you will see the "B1" trace, which is the scope's interpretation of the analog data as a USB packet, starting with a SYNC. Using this tool, we were able to scroll left and right through the entire capture buffer and read out transactions and data. While this isn't a practical way to capture huge swathes of data, it was more than enough for us to figure out at what point the system was having trouble, and we could move on with our work.

    While the Tek scope’s analysis abilities made fixing our USB analyzer a moot point, I figured I’d at least get a “Name that Ware” post out of it.

    by bunnie at September 28, 2014 07:17 PM

    Winner, Name that Ware August 2014

    The ware for August 2014 was a Dell PowerEdge PE1650 Raid controller. Thanks again to Oren Hazi for contributing the ware! Also, the winner is Bryce C, for being the first to correctly identify make and model. Congrats, email me for your prize!

    by bunnie at September 28, 2014 07:17 PM

    Village Telco

    Introducing Wildernets

The following is a guest post from Keith Williamson.

Wildernets is an alternative firmware for the MP02 that aims to widen the customer base for the MP02 by making initial configuration much easier and by adding new features such as instant messaging support. So even if you are comfortable operating a SECN 2.X network (as most on this forum are), you may find some of the Wildernets features of interest.

Wildernets is based on the latest version of SECN 2.1 but simplifies both the initial and ongoing configuration, making it possible for a user with few technical skills to get the network up and running quickly. In addition to SIP and POTS telephony, Wildernets supports instant messaging and serving local Web content. Wildernets firmware is complementary to SECN 2.X firmware: it targets a slightly different user base than the traditional VillageTelco user, though there is certainly a lot of overlap.

Deployment assumptions for SECN firmware have generally included an entrepreneur with the technical "chops" to roll out the network, and a user base that may have never had access to even basic telephony services. As we added support for softphone client software on smartphones, tablets, and laptops, the SECN network started to become useful for limited emergency communications and for small groups of users who need communications services outside the range of traditional PSTN and cellular coverage. These users are likely to own smartphones, tablets, and laptops that become much less useful in those environments.

With a Wildernets network, these devices become very useful again for calling or instant messaging other people on the network and for browsing local Web content. Generally, these users already know the basics of downloading, installing, and using Internet applications on their devices, but likely don't know how to set up networks with IP addresses, netmasks, gateways, and application services such as telephony and IM servers. Wildernets' goal is to remove that impediment.

    For more information check out the Wildernets project page.



    by Keith Williamson at September 28, 2014 03:55 PM

    September 25, 2014

    Richard Hughes, ColorHug

    AppStream Progress in September

    Last time I blogged about AppStream I announced that over 25% of applications in Fedora 21 were shipping the AppData files we needed. I’m pleased to say in the last two months we’ve gone up to 45% of applications in Fedora 22. This is thanks to a lot of work from Ryan and his friends, writing descriptions, taking screenshots and then including them in the fedora-appstream staging repo.

So fedora-appstream doesn't sound very upstream or awesome. This week I've sent another 39 emails and opened another 42 bugs (requiring 17 new bugzilla/trac/random-forum accounts to be opened). Every single file in the fedora-appstream staging repo has been sent upstream in one form or another, and I've been adding an XML comment to each one as a rough audit log of what happened where.

    Some have already been accepted upstream and we’re waiting for a new tarball release; when that happens we’ll delete the file from fedora-appstream. Some upstreams are really dead, and have no upstream maintainer, so they’ll probably languish in fedora-appstream until for some reason the package FTBFS and gets removed from the distribution. If the package gets removed, the AppData file will also be deleted from fedora-appstream.

    Also, in the process I’ve found lots of applications which ship AppData files upstream, but which for one reason or another do not install them into the binary rpm file. If you had to tell me I was talking nonsense in an email this week, I apologize. For my sins I’ve updated over a dozen packages to the latest versions so that the AppData file is included, and fixed quite a few more.

    Fedora 22 is on track to be the first release that mandates AppData files for applications. If upstream doesn’t ship one, we can either add it in the Fedora package, or in fedora-appstream.
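    For reference, a minimal AppData file looks roughly like this. This is a sketch based on the 2014-era AppData format; the application ID, text, and URLs are placeholders, not from any real package:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal AppData sketch; "example.desktop" and the URLs are placeholders -->
<application>
  <id type="desktop">example.desktop</id>
  <licence>CC0</licence>
  <description>
    <p>A short paragraph describing what the application does.</p>
  </description>
  <url type="homepage">http://example.org/</url>
  <screenshots>
    <screenshot type="default">http://example.org/screenshot.png</screenshot>
  </screenshots>
</application>
```

The file is installed to /usr/share/appdata/ by the package, which is exactly the step the paragraph above describes going missing in some binary rpms.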

    by hughsie at September 25, 2014 01:15 PM

    September 24, 2014

    September 23, 2014

    Free Electrons

    Software patents: a letter to the Members of the European Parliament

    The threat of software patents is back, through the “Unitary Patent” project currently under review by the Legal Affairs Committee of the European Parliament. In a few words, it would entrust the European Patent Office (EPO) with deciding what is patentable and what is not. Yet the EPO is well known for being favorable to software patents, and it also escapes any democratic oversight.

    You can find more information on the site as well as on the Stop Software Patents site.

    After the last battles over software patents in 2005, it was time for me to take up my pen again and try to make our representatives aware of the dangers that software patents represent. The letter below was sent last week to each member of the Legal Affairs Committee, in an English or a French version.

    This is not an in-depth legal study of software patents, as I am only an engineer, without advanced legal knowledge. It is rather a testimony to my concerns about these patents, concerns legitimized by the many abuses observed around the world for many years, and by the repeated pressure our legislators have faced to legalize software patents in Europe.

    It may not be too late to write to your representatives in the European Parliament, but in any case, it is not too late to sign the petition that many companies and individuals have already signed.

    Dear Member of the European Parliament,

    I am the founder and CEO of Free Electrons, a young embedded software engineering company which helps companies throughout the world design embedded systems, in a fast-growing market.

    It is the availability of a large number of Open-Source software building blocks that has allowed our company to grow continuously since its creation in 2004. A very large number of industrial and consumer electronics products are designed from these building blocks, developed jointly by a worldwide community of developers in which our company takes part.

    This momentum could have been stifled if software patents had been legal in the EU, as they are in the United States and Japan. Through their sheer number and frequent triviality, these patents constitute a real “minefield” for anyone creating software or systems embedding software. For a company with limited resources, it is impossible to make sure that the ideas one implements while programming, or the software components one reuses, do not “step on” a method patented by a third party. The creator of an innovative product combining hardware and software thus runs the risk of seeing their investment ruined by a bigger competitor threatened by the invention. Such a competitor, if it holds a sufficiently large patent portfolio, can always find some trivial software patent infringed by the competing product, and have its distribution stopped. Another danger comes from companies (“patent trolls”) which create no products and live only by hunting for victims whose products infringe their patents.

    It is therefore worrying to observe that, at least in the software field, patents have been diverted from their primary purpose of fostering innovation. Quite the opposite is happening: patents now seem to be little more than an instrument for giant companies to fight their competitors, big and small, and to block the distribution of competing products. The first patents granted a temporary monopoly in exchange for the disclosure of a trade secret. For many software patents, such as the famous “double click” patented by Microsoft, there is no longer any trade secret to disclose, so easy are their effects to reproduce. Yet a monopoly continues to be granted for such patents.

    Our company is therefore worried about the current plans to establish a unitary patent in the EU, together with a unified patent court.

    We are concerned that the unitary patent regulation, according to the agreement reached in December 2011 by the negotiators of the Council, the Commission and the Legal Affairs Committee of the European Parliament, leaves every question about the limits of patentability to the case law of the European Patent Office (EPO), with no democratic oversight and no appeal before an independent court.

    Yet, in defiance of the rejection the European Parliament expressed in its votes of September 24, 2003 and July 6, 2005, the EPO has continued to grant software patents, under the misleading name of “computer-implemented inventions”. This may be because the EPO, escaping any democratic oversight, has a financial interest in granting as many patents as possible, thereby fueling an increase in litigation which benefits law firms but discourages innovation, even though innovation is the main driver of our economy.

    The unitary patent regulation is an opportunity for EU legislators to harmonize substantive patent law within the institutional and legal framework of the EU, and to put an end to the EPO’s self-interested practices which extend the scope of patentability to software. Should this fail, the unitary patent will do European IT companies more harm than good.

    For these reasons, we strongly urge legislators to adopt amendments which clearly state that EPO decisions are subject to appeal before the Court of Justice of the European Union, and which reaffirm the rejection of software patents expressed by the European Parliament’s votes.

    Please do not hesitate to contact me if you wish.

    Best regards,

    Michael Opdenacker

    by Michael Opdenacker at September 23, 2014 01:53 PM

    Happy New Year 2014 – Achievements in 2013

    This article is also published in our quarterly newsletter.

    The whole Free Electrons team wishes you an excellent year 2014, full of optimism and energy!

    We take this opportunity to share some news about Free Electrons.

    In 2013, Free Electrons increased its contributions to Open Source projects, especially to the Linux kernel.

    639 patches were merged into the Linux kernel, mainly improving support for Marvell and Allwinner ARM processors. For every Linux version released in 2013, Free Electrons ranked among the top 30 contributing companies in number of commits. We now have strong experience in getting support for ARM processors merged into the Linux kernel, and we hope to further develop this activity in 2014.

    595 patches were merged into Buildroot, a build system for embedded systems, across a wide range of areas, making Free Electrons the second most important contributor after the Buildroot maintainer. This work allows Free Electrons to keep its expertise in cross-compilation and embedded filesystem build tools up to date.

    26 patches were merged into the Barebox bootloader.

    22 patches went into the Freescale layer for Yocto, mainly to support the Crystalfontz embedded boards. We took this opportunity to develop a new image type, and significant improvements were made to the Barebox build recipe.

    Some of these contributions, along with many other activities, were carried out as part of development and consulting projects in 2013, in particular:

    • Linux kernel development, adding support for our customers’ ARM processors or embedded boards to the mainline Linux kernel, in particular for Marvell and Freescale processors.
    • Kernel and device driver development, and integration into an embedded build system, for a medical device based on an Atmel SAMA5 processor.
    • Linux kernel device driver development for radio-frequency transceivers, on a home automation platform based on the Atmel SAMA5.
    • Boot time reduction projects.
    • Consulting and audit projects around Buildroot.

    We also significantly improved and updated our training courses:

    • Our Linux kernel device driver development course was updated to use the BeagleBone Black platform, to cover the use of the Device Tree on ARM platforms, and to use a fun I2C device to illustrate device driver development in our practical labs.
    • Our Android system development course was updated to Android 4.x, and to use the BeagleBone Black as the development platform in the practical labs.
    • Our Embedded Linux course was updated to use more recent Linux kernel versions, in particular to cover the use of the Device Tree on ARM platforms.

    Our training materials remain freely available under a Creative Commons license, including their source code, available through a public Git repository.

    Free Electrons continues to believe it is important for its engineers to attend technical conferences, to keep them up to date with the latest developments around Linux and to strengthen the ties with community developers which allow our projects to move forward faster. For this reason, we attended a large number of conferences in 2013:

    • FOSDEM 2013, in Brussels, Belgium. Our CTO and engineer Thomas Petazzoni gave a talk on ARM kernel development.
    • Buildroot Developers Meeting, Brussels, Belgium. Our engineer Thomas Petazzoni organized and took part in this event, sponsored by Google, dedicated to Buildroot development.
    • Embedded Linux Conference 2013 and Android Builders Summit 2013, in San Francisco, USA. Our engineer Grégory Clement gave a talk on the clock framework in the Linux kernel. Our engineer Thomas Petazzoni gave a talk on ARM kernel development. See also our videos.
    • Linaro Connect Europe 2013, Dublin, Ireland. Our engineer Thomas Petazzoni took part in many discussions around ARM processor support in the Linux kernel.
    • Linux Plumbers 2013, New Orleans, USA. Our engineer Maxime Ripard attended the conference and took part in discussions around kernel and Android development.
    • Kernel Recipes, Paris, France. Our CEO Michael Opdenacker and our CTO Thomas Petazzoni attended this conference about the Linux kernel, and Thomas gave two talks: one on ARM kernel development and one on Buildroot.
    • ARM kernel mini-summit 2013, Edinburgh, UK. Our engineers Grégory Clement, Thomas Petazzoni and Maxime Ripard took part in the ARM kernel mini-summit, reserved for the core ARM kernel developers. This summit is where the directions for ARM processor support in the Linux kernel are discussed and defined.
    • Embedded Linux Conference Europe, Edinburgh, UK. Grégory Clement gave a talk on the clock framework in the Linux kernel and Thomas Petazzoni gave a talk on the Device Tree.
    • Buildroot Developers Meeting, Edinburgh, UK. Our engineer Thomas Petazzoni organized and took part in this 2-day event, sponsored by Imagination Technologies, dedicated to Buildroot development.

    A very important development for Free Electrons in 2013 was the addition of a new engineer to our team: Alexandre Belloni joined us in March 2013. Alexandre has very significant experience in embedded Linux and kernel development. More information on his profile.

    Now let’s move on to our plans for 2014:

    • Hire several new engineers. One of them has already signed and will join us in April, bringing solid kernel development experience, including contributions to the mainline kernel.
    • Our involvement in ARM processor support in the Linux kernel will grow substantially.
    • Two new training courses will be offered: a “Boot time reduction” course and an “OpenEmbedded and Yocto” course.
    • For the first time, we will organize public sessions (open to individual registration) outside France.
      • Our next English-language session on “Android system development” will take place from April 14 to 17 in Southampton, UK.
      • We are also preparing Embedded Linux and Linux kernel sessions in the United States, which should be announced in the coming weeks.
      • We also plan to organize sessions on the same topics in Germany, with German-speaking trainers.
      • By the way, our French-language Android courses will continue to be held in Toulouse, but there will also be a session from April 1 to 4 in Lyon.

      You can check the complete list of our public sessions.

    As in 2013, we will attend several of the most important technical conferences: Linux Conf Australia (January 2014), FOSDEM (February 2014), Embedded Linux Conference (April 2014) and Embedded Linux Conference Europe (October 2014).

    You can follow Free Electrons news by reading our blog and following our quick news on Twitter. We now also have a Google+ page.

    Once again, Happy New Year 2014!

    The whole Free Electrons team.

    by Michael Opdenacker at September 23, 2014 01:49 PM

    Free Electrons quarterly newsletter: May 2014

    This article is also published in our quarterly newsletter.

    Free Electrons is happy to share some news about the company’s training and contribution activities.

  • Supporting a new ARM platform: the example of the Allwinner processors – Maxime Ripard

    by Michael Opdenacker at September 23, 2014 01:49 PM

    Free Electrons quarterly newsletter: September 2014

    This article is also published in our quarterly newsletter.

    Free Electrons is happy to share some news about the company’s training and contribution activities.

    Linux kernel contributions

    Since our last newsletter, our engineers have continued to make significant contributions to the Linux kernel, especially in the area of support for ARM processors and for the platforms using them.

    • 218 patches from Free Electrons were merged into Linux 3.15, making Free Electrons the 12th company contributing to this release, in number of patches. See our blog post.
    • 388 patches were accepted into Linux 3.16, making Free Electrons the 7th company contributing to this release. See our blog post.
    • For the upcoming 3.17 release, we have already merged 146 patches, and we have a lot of ongoing work for the following releases.

    Here are our main contributions:

    • The addition of a ubiblk driver, which makes it possible to use traditional filesystems on top of UBI volumes, and therefore on NAND flash storage. Only read-only mode is supported, but this already makes it possible to use SquashFS, a very efficient filesystem, safely on NAND flash.
    • Another addition is support for the new Marvell Armada 375 and Armada 38x processors. In just two release cycles (the 3.15 and 3.16 releases), we pushed nearly complete support for these new processors. The network driver for the Armada 375 is one missing piece, and will appear in the 3.17 release.
    • Our maintenance work on the Atmel AT91 and SAMA5 processors continued, with more conversions to the Device Tree, the Common Clock Framework, and other modern kernel mechanisms. We also developed the DRM/KMS graphics driver for the SAMA5D3 SoC, which has already been posted and will hopefully be merged soon.
    • Our work on supporting the Marvell Berlin processor started being merged in Linux 3.16. This processor is used in various TVs, media players and small devices such as the Google Chromecast. Basic support was merged, including the Device Trees, the clock drivers, the pinmux driver, as well as GPIO and SDHCI support. AHCI support should land in 3.17, and USB and network support are expected for 3.18.
    • Work on supporting the Allwinner SoCs continued, especially on the A31 processor: SPI and I2C support, drivers for the bus and for the PRCM controller, and USB support.
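    As a rough illustration of what a Device Tree conversion replaces: instead of board-specific C registration code, the hardware is described declaratively in a .dts file. The fragment below is hypothetical (the addresses and the attached device are invented for illustration; the compatible strings follow upstream bindings):

```dts
/* Hypothetical fragment: an I2C controller node with one device on its bus */
i2c0: i2c@f0014000 {
        compatible = "atmel,at91sam9x5-i2c";
        reg = <0xf0014000 0x4000>;
        status = "okay";

        temp-sensor@48 {
                compatible = "national,lm75";
                reg = <0x48>;
        };
};
```

The kernel then binds generic drivers to these nodes by matching the compatible strings, which is why converting a platform to the Device Tree removes so much board-specific code.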

    We now have fairly extensive experience in writing kernel drivers and getting code merged into the mainline kernel sources. Do not hesitate to contact us if you need help developing Linux kernel drivers, or supporting a new board or processor.

    Buildroot contributions

    Our involvement in the Buildroot project, one of the most popular embedded filesystem build tools, continued. We merged 159 patches into the 2014.05 release of the project (out of a total of 1293 patches), and 129 patches into the 2014.08 release (out of a total of 1353 patches). In addition, our engineer Thomas Petazzoni often acts as interim maintainer when Peter Korsgaard, the maintainer, is not available. The main features we added are: major improvements to Python 3 support, the addition of EFI bootloaders, and support for the musl C library.

    Embedded Linux projects

    Of course, we also carried out embedded Linux development and boot time reduction projects for various embedded system makers, with a less visible impact on community projects. Still, we will try to share the generic experience we gained through future blog posts.

    New training course: Yocto Project and OpenEmbedded

    A large number of embedded Linux projects use build systems which integrate the various components of a system into a ready-to-use filesystem image. Among the existing solutions, the Yocto Project and OpenEmbedded are very popular.

    We have therefore developed a new 3-day course, Yocto Project and OpenEmbedded, to help engineers and companies which use, or are interested in using, these solutions for their embedded Linux projects. Starting with an understanding of the basic principles of Yocto, the course goes into the details of writing package recipes, supporting a board in Yocto, creating custom images, and more.

    The detailed agenda of the course is available. You can order an on-site session, or attend our first public session organized in Toulouse from November 18 to 20.

    Embedded Linux course update

    The embedded Linux ecosystem evolves very quickly, so we constantly update our courses to keep up with the latest developments. As part of this effort, we recently made a major update to our Embedded Linux course: the hardware used in the practical labs was switched to the popular and interesting Atmel SAMA5D3 Xplained board, and many labs were improved to make learning easier. See our blog post for more details.

    Mailing list for our course attendees

    We have set up a new service for the attendees of our training sessions: a dedicated mailing list, on which they can ask any further questions after the course, share their experience, and get in touch with other attendees and with Free Electrons engineers. Of course, all Free Electrons engineers are subscribed and take part in the discussions. One more useful service offered with our training sessions!

    See more details.

    Conferences: ELC, ELCE, Kernel Recipes

    The Free Electrons engineering team will attend the Embedded Linux Conference Europe and Linux Plumbers conferences next month in Düsseldorf, Germany. Several Free Electrons engineers will also give talks at ELCE:

    In addition, Thomas will also take part in the Buildroot Developers Day, held in Düsseldorf right before the Embedded Linux Conference Europe.

    See also our blog post about ELCE for more details.

    Maxime Ripard and Michael Opdenacker will also attend the Kernel Recipes 2014 conference in Paris, on September 25-26. Maxime will give his Allwinner kernel talk there. See our blog post for more details.

    Finally, we recently published the videos of a number of talks from the Embedded Linux Conference, held in April in San Jose. That’s a good amount of interesting information about embedded Linux! See for yourself on our blog post.

    Upcoming training sessions

    We offer a number of public training sessions with seats still available:

    Sessions et dates

    by Michael Opdenacker at September 23, 2014 08:42 AM

    Altus Metrum

    keithp's rocket blog: easymega-118k

    Neil Anderson Flies EasyMega to 118k' At BALLS 23

    Altus Metrum would like to congratulate Neil Anderson and Steve Cutonilli on the success of their two-stage rocket, “A Money Pit”, which flew on Saturday the 20th of September on an N5800 booster followed by an N1560 sustainer.

    “A Money Pit” used two Altus Metrum EasyMega flight computers in the sustainer, each one configured to light the sustainer motor and deploy the drogue and main parachutes.

    Safely Staged After a 7 Second Coast

    After the booster burned out, the rocket coasted for 7 seconds, slowing to 250 m/s, at which point EasyMega was programmed to light the sustainer. As a back-up, a timer was set to light the sustainer 8 seconds after booster burn-out. In both cases, sustainer ignition would have been inhibited if the rocket had tilted more than 20° from vertical. During the coast, the rocket climbed from 736 m to 3151 m, with its speed dropping from 422 m/s down to 250 m/s.

    This long coast, made safe by EasyMega's quaternion-based tilt sensor, allowed this flight to reach a spectacular altitude.
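    As an illustration of the staging logic described above, here is a rough Python sketch. The function and parameter names are invented; the real logic lives in the AltOS C firmware and is configured, not coded, by the user:

```python
# Hypothetical sketch of tilt-inhibited sustainer ignition logic,
# not actual AltOS firmware code.

def should_light_sustainer(time_since_burnout_s, speed_m_s, tilt_deg,
                           speed_threshold=250.0, backup_timer_s=8.0,
                           max_tilt_deg=20.0):
    """Return True when the sustainer may be ignited.

    Primary condition: the rocket has slowed to the configured speed.
    Backup condition: a fixed timer after booster burn-out.
    Safety interlock: ignition is inhibited if the rocket has tilted
    more than max_tilt_deg from vertical.
    """
    if tilt_deg > max_tilt_deg:
        return False  # never light the motor off-vertical
    primary = speed_m_s <= speed_threshold
    backup = time_since_burnout_s >= backup_timer_s
    return primary or backup
```

For this flight, the primary speed condition fired first; the 8-second timer and the 20° tilt limit acted as back-up and safety interlock, respectively.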

    Apogee Determined by Accelerometer

    Above 100k', the MS5607 barometric sensor is out of range. However, as you can see from the graph, it continued to return useful data. EasyMega doesn't expect that to work, though, and automatically switched to accelerometer-only apogee determination mode.

    Because off-vertical flight will under-estimate the time to apogee when using only an accelerometer, the EasyMega boards were programmed to wait for 10 seconds after apogee before deploying the drogue parachute. That turned out to be just about right; the graph shows the barometric data leveling off right as the apogee charges fired.

    Fast Descent in Thin Air

    Even with the drogue safely fired at apogee, the descent rate rose to over 200m/s in the rarefied air of the upper atmosphere. With increasing air density, the airframe slowed to 30m/s when the main parachute charge fired at 2000m. The larger main chute slowed the descent further to about 16m/s for landing.
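    The fast descent in thin air follows directly from the standard terminal-velocity relation, v = sqrt(2mg / (ρ·Cd·A)): for a fixed parachute, descent speed scales as 1/sqrt(air density). A small sketch with made-up mass and drag numbers (not the actual flight parameters):

```python
import math

def terminal_velocity(mass_kg, rho_kg_m3, cd=1.5, area_m2=1.0, g=9.81):
    """Steady-state descent speed where drag balances weight:
    m*g = 0.5 * rho * Cd * A * v^2  =>  v = sqrt(2*m*g / (rho*Cd*A))."""
    return math.sqrt(2 * mass_kg * g / (rho_kg_m3 * cd * area_m2))

# Air density falls from ~1.2 kg/m^3 at sea level to roughly 0.02 kg/m^3
# near 36 km, so the same chute descends sqrt(1.2/0.02) ≈ 7.7x faster there.
print(terminal_velocity(10, 1.2))   # sea-level descent speed
print(terminal_velocity(10, 0.02))  # upper-atmosphere descent speed
```

This is why the airframe could fall at over 200 m/s under drogue up high, yet slow to 30 m/s by the time the main charge fired at 2000 m.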

    September 23, 2014 04:33 AM

    September 19, 2014

    Video Circuits

    Video Workshop

    Here are pics of the analogue video workshop packs each attendee will be getting tomorrow!

    by Chris ( at September 19, 2014 05:14 PM

    September 13, 2014

    Altus Metrum

    keithp's rocket blog: Altos1.5

    AltOS 1.5 — EasyMega support, features and bug fixes

    Bdale and I are pleased to announce the release of AltOS version 1.5.

    AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

    This is a major release of AltOS, including support for our new EasyMega board and a host of new features and bug fixes.

    AltOS Firmware — EasyMega added, new features and fixes

    Our new flight computer, EasyMega, is a TeleMega without any radios:

    • 9 DoF IMU (3 axis accelerometer, 3 axis gyroscope, 3 axis compass).

    • Orientation tracking using the gyroscopes (and quaternions, which are lots of fun!)

    • Four fully-programmable pyro channels, in addition to the usual apogee and main channels.

    AltOS Changes

    We've made a few improvements in the firmware:

    • The APRS secondary station identifier (SSID) is now configurable by the user. By default, it is set to the last digit of the serial number.

    • Continuity of the four programmable pyro channels on EasyMega and TeleMega is now indicated via the beeper. Four tones are sent out after the continuity indication for the apogee and main channels with high tones indicating continuity and low tones indicating an open circuit.

    • Configurable telemetry data rates. You can now select among 38400 (the previous value, and still the default), 9600 or 2400 bps. To take advantage of this, you'll need to reflash your TeleDongle or TeleBT.

    AltOS Bug Fixes

    We also fixed a few bugs in the firmware:

    • TeleGPS had separate flight logs, one for each time the unit was turned on. Turning the unit on to test stuff and turning it back off would consume one of the flight log 'slots' on the board; once all of the slots were full, no further logging would take place. Now, TeleGPS appends new data to an existing single log.

    • Increase the maximum computed altitude from 32767m to 2147483647m. Back when TeleMetrum v1.0 was designed, we never dreamed we'd be flying to 100k' or more. Now that's surprisingly common, and so we've increased the size of the altitude data values to fit modern rocketry needs.

    • Continuously evaluate pyro firing condition during delay period. The previous firmware would evaluate the pyro firing requirements, and once met, would delay by the indicated amount and then fire the channel. If the conditions had changed state, the channel would still fire. Now, the conditions are continuously evaluated during the delay period and if they change state, the event is suppressed.

    • Allow negative values in the pyro configuration. Now you can select a negative speed to indicate a descent rate or a negative acceleration value to indicate acceleration towards the ground.
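    The continuous-evaluation fix above can be modeled roughly in Python (illustrative only; the actual implementation is in the AltOS C firmware, and the names here are invented):

```python
# Sketch of the fixed pyro-channel behavior: the firing condition is
# re-checked on every tick of the delay period, and the event is
# suppressed if the condition stops holding. Not actual AltOS code.

def fire_after_delay(samples, condition, delay_ticks):
    """samples: iterable of sensor states, one per tick.
    Returns the tick at which the channel fires, or None if the
    condition stopped holding during the delay (event suppressed)
    or was never met."""
    counting = False
    remaining = 0
    for tick, state in enumerate(samples):
        if not counting:
            if condition(state):
                counting = True       # condition first met: start delay
                remaining = delay_ticks
        else:
            if not condition(state):
                return None           # condition changed state: suppress
            remaining -= 1
            if remaining <= 0:
                return tick           # delay elapsed with condition held
    return None
```

The old behavior amounted to checking the condition once and then firing unconditionally after the delay; here a change of state during the delay suppresses the event.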

    AltosUI and TeleGPS — EasyMega support, OS integration and more

    The AltosUI and TeleGPS applications have a few changes for this release:

    • EasyMega support. That was a simple matter of adapting the existing TeleMega support.

    • Added icons for our file types, and hooked up the file manager so that AltosUI, TeleGPS and/or MicroPeak are used to view any of our data files.

    • Configuration support for APRS SSIDs, and telemetry data rates.

    September 13, 2014 06:47 PM

    September 10, 2014


    More lenses tested: Evetar N123B05425W vs. Sunex DSL945D

    We just tested two samples of the Evetar N123B05425W lens, which is very similar to the Sunex DSL945D described in the previous post.

    Lens Specifications

                                   Sunex DSL945D   Evetar N123B05425W
    Focal length                   5.5 mm          5.4 mm
    F#                             1/2.5           1/2.5
    IR cutoff filter               yes             yes
    Lens mount                     M12             M12
    Image format                   1/2.3           1/2.3
    Recommended sensor resolution  10 Mpix         10 Mpix

    Lens comparison

    Both lenses are specified to work with 10-megapixel sensors, so it is possible to compare “apples to apples”. This performance comparison is based only on our testing procedure and does not involve any additional data from the lens manufacturers; the lens samples were randomly selected from the purchased devices. Different applications require different features (or combinations of features) of a lens, and each lens has its advantages over the other.

    The Sunex lens has very low longitudinal chromatic aberration (~5 μm), as indicated by the “Astigmatism” (bottom left) graphs; it is well corrected, so the red and blue curves are on the same side of the green one. The Evetar lens has a very small difference between red and green, but blue is more than 15 μm off. My guess is that the factory tried to make a lens that can work in “day/night” applications and optimized the design for the visible and infrared spectrum simultaneously. Sacrificing infrared (which in any case has no value in high-resolution visible-light applications) at the design stage could improve the performance of this already very good product.

    Petzval field curvature of the DSL945D is slightly better than that of the N123B05425W; astigmatism (the difference between the sagittal and the tangential focal shift for the same color) is almost the same, with a maximum of 18 μm at ~2 mm from the image center.

    Center resolution (MTF50 is shown) of the DSL945D is higher for each color, but only in the center: it drops much more quickly toward the periphery than the resolution of the N123B05425W does. According to our measurements, only the sagittal (radial) resolution of the blue component of the Evetar lens drops below 100 lp/mm, which gives this lens higher full-field weighted resolution values (top left plot on each figure).

    Lens testing data

    The graphs below and the testing procedure are described in the previous post. Solid lines correspond to the tangential components and dashed lines to the sagittal components of the radial aberration model; point marks account for the measured parameter variations in different parts of the lenses at the same distance from the center.

    Sunex DSL945D


    Fig.1 Sunex DSL945D sample #1020 test results. Spreadsheet link.

    Evetar N123B05425W


    Fig.2 Evetar N123B05425W sample #9071 test results. Spreadsheet link.


    Fig.3 Evetar N123B05425W sample #9072 test results. Spreadsheet link.


    by andrey at September 10, 2014 08:25 PM

    September 09, 2014

    Free Electrons

    Free seminar on Android, January 29 in Gardanne

    The Captronic program is organizing a free seminar on Android and its use in embedded systems. It will take place on January 29 in Gardanne, near Marseille, and will be presented by my colleague Maxime Ripard, the creator of our Android system development course.


    • General overview of Android
    • Opportunities for using Android in embedded systems that are neither phones nor tablets
    • Details of Android's architecture and how to customize it
    • Source code and compilation
    • Changes Android makes to the Linux kernel
    • Bootloaders for Android
    • Supporting new hardware
    • The organization of the Android filesystem
    • Android's native layers and calling a C program from Android to access specific hardware
    • Introduction to application development
    • System customization
    • Using adb (Android Debug Bridge) for debugging and remote control of the system
    • Resources and best practices


    • Demonstrations of several aspects of Android system development
      • Fetching the sources and compiling
      • Demonstration of the Android emulator
      • Booting Android on a board based on an ARM OMAP 3530 processor, using a serial console
      • Supporting device-specific buttons, using the "Back" key as an example
      • Using adb: installation, accessing system logs, getting a command line on the device, exchanging files with a PC
      • Customizing the system: changing the product name and the default wallpaper, adding a new property
      • Developing a native library to access specific hardware (a USB device, for example), and accessing this functionality from the Android framework through a dedicated class and JNI library
      • Presentation of an application for controlling a USB device
    • Questions and answers

    Attendance is free, but you must register in advance. See Captronic's page about this seminar.

    The slides for this seminar are also available in full, so you can check that the seminar will match your expectations.

    by Michael Opdenacker at September 09, 2014 12:01 PM

    Michele's GNSS blog

    At ION GNSS+ 2014

    To anyone who happens to be in Tampa these days: I will be around as well.

    Feel free to come and chat!

    by (Michele Bavaro) at September 09, 2014 11:04 AM

    September 07, 2014

    Video Circuits

    Tomislav Mikulic

    "Tomislav Mikulic is a Croatian computer graphics pioneer who exhibited at Tendencies 5 in Zagreb (former Yugoslavia) in 1973, at the age of 20. He composed the first Yugoslav computer animation film, which had its premiere on 13 May 1976 in Zagreb."

    by Chris at September 07, 2014 09:01 AM

    September 06, 2014


    74HC4094 - 8-bit shift register : weekend die-shot

    74HC4094 is an 8-bit serial-in/parallel-out shift register.

    September 06, 2014 09:43 PM

    September 02, 2014


    NibbleKiosk: controlling chromium through sound

    **updated for version 0.0.2**

    The idea of NibbleKiosk is to turn old monitors into interactive displays using simple hardware, such as a Raspberry Pi with a microphone. The sounds received by the microphone are turned into URLs and sent to the Chromium browser. The software comes with three programs:

    • one to create the sound files based on the URLs to be used by the client
    • one to create a database of URLs
    • the main program which does the signal processing and controlling of Chromium

    You first need to create a database of URLs:

    nibbledb -u -d test.db

    which outputs:

    test.db: key 1B95FB47 set to

    You can then create a sound file to trigger the URL:

    nibblewav 1B95FB47

    This will output a wav file with the same hex code in lowercase to your /tmp directory. Play it back with

    aplay /tmp/1b95fb47.wav

    and you should hear what it sounds like.

    You can now start the main program on the receiver. You should first start Chromium listening on port 9222:

    chromium-browser --remote-debugging-port=9222&

    You are now ready to start the main program with the database you created earlier:

    nibblekiosk -d test.db

    This should now listen continually for the right sounds to trigger URLs on Chromium. You can build your own clients with the wav files you generate.

    There are a number of variables involved in getting a functioning system. A key one is the signal magnitude required to trigger the system; you can use the -m flag to experiment with this. On a Raspberry Pi I have set this as low as -m 2, e.g.

    nibblekiosk -d test.db -m 2
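    The magnitude check behind the -m flag is conceptually simple: watch the incoming audio for a frequency component whose energy clears a threshold. Here is a toy sketch of that idea (not NibbleKiosk's actual implementation; the sample rate, threshold, and function are made up for illustration), using a synthetic tone and an FFT:

```python
import numpy as np

# Toy illustration of magnitude-threshold tone detection, in the spirit
# of the -m flag. This is NOT NibbleKiosk's actual code.
RATE = 8000          # sample rate, Hz (hypothetical)
THRESHOLD = 2.0      # minimum normalized magnitude to trigger

def detect_tone(samples, rate=RATE, threshold=THRESHOLD):
    """Return the dominant frequency (Hz) if its normalized FFT
    magnitude clears the threshold, else None."""
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
    peak = np.argmax(spectrum)
    if spectrum[peak] >= threshold:
        return peak * rate / len(samples)
    return None

# A one-second 1 kHz test tone with amplitude well above the threshold.
t = np.arange(RATE) / RATE
tone = 10.0 * np.sin(2 * np.pi * 1000 * t)
print(detect_tone(tone))   # prints 1000.0
```

    A real decoder would run this continuously on short windows of microphone input and map sequences of detected tones back to hex digits, but the thresholding step is where a flag like -m comes into play.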

    I have had good performance from the microphone on an old USB webcam; if you want something small for the Pi, Konobo makes a very small USB microphone.

    If you are feeling brave and want to try it, I have made some packages for Ubuntu (14.04) and Raspbian:

    The only dependencies are OpenAL and Berkeley DB.

    by john at September 02, 2014 08:15 PM

    Bunnie Studios

    Name that Ware, August 2014

    The Ware for August 2014 is below.

    Sorry this month’s ware is a little bit late; I’ve been offline for the past couple of weeks. Thanks to Oren Hazi for contributing this ware!

    by bunnie at September 02, 2014 05:12 PM

    Winner, Name that Ware July 2014

    The Ware for July 2014 is a GSM signal booster, bought over the counter from a white-label dealer in China. There were many thoughtful, detailed and correct responses, making it very hard to choose a winner. Lacking a better algorithm than first-closest response, wrm is the winner. Congrats, email me for your prize!

    by bunnie at September 02, 2014 05:10 PM

    August 30, 2014

    Video Circuits

    Richard Paul Lohse

    Picked up an exhibition catalogue from a Richard Paul Lohse show from 1970. There were some pretty interesting diagrams of the systems he used to construct his images, with concerns similar to early computer art and constructivist work. Different image generation and process control systems interest me at the moment, from multiplane cameras, to the Scanimate, to digital software, but something about working hands-on the way Lohse did is still appealing.

    by Chris at August 30, 2014 09:14 AM

    August 29, 2014

    Richard Hughes, ColorHug

    Putting PackageKit metadata on the Fedora LiveCD

    While working on the preview of GNOME Software for Fedora 20, one problem became very apparent: when you launched the “Software” application for the first time, it went and downloaded metadata and then built the libsolv cache. This could take a few minutes of looking at a spinner, and made for a really bad first experience. We tried hard to mitigate this: when we ask PackageKit for data we say we don’t mind the cache being old, but on a LiveCD or on first install there isn’t any metadata at all.

    So, what are we doing for F21? We can’t run packagekitd while constructing the live image, as it’s a D-Bus daemon and would be looking at the system root, not the live-cd root. Enter packagekit-direct. This is an admin-only tool (no man page) installed in /usr/libexec that is designed to be run when you want to use the PackageKit backend without getting D-Bus involved.

    For Fedora 21 we’ll be running something like DESTDIR=$INSTALL_ROOT /usr/libexec/packagekit-direct refresh in fedora-live-workstation.ks. This means that when the Live image is booted we’ve got both the distro metadata to use, and the libsolv files already built. Launching gnome-software then takes 440ms until it’s usable.

    by hughsie at August 29, 2014 07:04 PM

    Altus Metrum

    bdale's rocket blog: EasyMega v1.0

    Keith and I are pleased to announce the immediate availability of EasyMega v1.0!

    EasyMega is effectively a TeleMega without the GPS receiver and radio telemetry system. TeleMega and EasyMega both have 6 pyro channels and enough sensors to lock out pyro events based on conditions like tilt-angle from vertical, making both boards ideal solutions for complex projects with air start or multi-stage engine ignition requirements. Choose TeleMega for a complete in-airframe solution including radio telemetry and GPS, or EasyMega if you already have a tracking solution you like and just need intelligent control of multiple pyro events.

    EasyMega is 2.25 x 1.25 inches (57.15 x 31.75 mm), which means it can be easily mounted in a 38 mm air frame coupler. The list price for EasyMega is $300, but as an introductory special, you can purchase one now through Labor Day for only $250! This special is only good for in-person purchases at Airfest and orders placed directly through Bdale's web store.

    Altus Metrum products are available directly from Bdale's web store, and from these distributors:

    All Altus Metrum products are completely open hardware and open source. The hardware design details and all source code are openly available for download, and advanced users are invited to join our developer community and help to enhance and extend the system. You can learn more about Altus Metrum products at

    August 29, 2014 03:12 AM

    August 27, 2014

    Andrew Zonenberg, Silicon Exposed

    Updates and pending projects

    It's been a while since I've written anything here so here's a bit of a brain-dump on upcoming stuff that will find its way here eventually.

    Thesis stuff

    This has been eating the bulk of my time lately. I just submitted a paper to ACM Computing Surveys and am working on a conference paper for EDSC that's due in two weeks or so. With any luck the thesis itself will be finished by May and I can graduate.

    Lab improvements

    I'm in the process of fixing up my lab to solve a bunch of the annoying things that have been bugging me. Most/all of these will be expanded into a full post once it's closer to completion.
    • Racking the FPGA cluster
      The "raised floor" FPGA cluster was a nice idea but the 2D structure doesn't scale. I've filled almost all of it and I really need the desk space for other things.

      I ordered a 3U Eurocard subrack from Digikey and once it arrives will be making laser-cut plastic shims to load all of my small boards into it. The first card made for the subrack is already inbound: a 3U x 4HP 10-port USB hub to replace several of the 4-port hubs I'm using now. It will be hosted by my Beaglebone Black, which will function as a front-end node bridging the USB-UART and USB-JTAG ports out to Ethernet.

      The AC701 board is huge (well over 3U on the shortest dimension) so I may end up moving it into one of the two empty 1U Sun "pizza box" server cases I have lying around. If this happens the Atlys boards may accompany it since they won't fit comfortably in 3U either.
    • Ethernet - JTAG card
      FTDI-based JTAG is simple and easy but the chips are pricey and to run in a networked environment you need a host PC/server. I'm in the early stages of designing an XC6SLX45 based board with a gigabit Ethernet port, IPv6 TCP offload engine, and 16 buffered, level-shifted JTAG ports. It will speak the libjtaghal jtagd protocol directly, without needing a CPU or operating system, for ultra-low latency and near zero jitter.
    • Logo
      I've gone long enough without having a nice logo to put on my boards, enclosures, etc. At some point I should come up with one...

    Test equipment

    I've gradually grown fed up with current test equipment. Why would I want to fiddle with knobs and squint at a tiny 320x240 LCD when I could view the signal on my 7040x1080 quad-screen setup or, better yet, the triple 4K displays I'm going to buy when prices come down a bit? Why waste bench space on dials and buttons when I could just minimize or close the control application when it's not in use? As someone who spends most of his time sitting in front of a computer I'd much prefer a "glass cockpit" lab with few physical buttons.

    I'm now planning to make a suite of test equipment based on the Unix philosophy: do one thing and do it well. Each board will be a 3U Eurocard with a power input on the back and Ethernet + probe/signal connections on the front. They will implement the low-level signal capture/generation, buffering, and trigger logic but then leave all of the analysis and configuration UI to a PC-based application, connected over 1- or 10-gigabit Ethernet depending on the tool. Projects are listed in the approximate order that I plan to build them.
    • 4-channel TDR for testing cat5e cable installs
      This design will be based on the same general concept as a SAR ADC, with the sampling matrix transposed. Instead of gradually refining one sample before proceeding to the next, the entire waveform will be sampled once, then gradually refined over time.

      Each channel of the TDR will consist of a high-speed 100-ohm differential output from a Spartan-6 FPGA to generate a pulse with very fast rise time, AC coupled into one pair of a standard RJ45 jack which will plug into the cable under test.

      On the input stage, the differential signals will be subtracted by an opamp, then the single-ended differential voltage compared against a reference voltage produced by a DAC using a LMH7324SQ or similar ultra-fast comparator. The comparator will have LVDS outputs driving a differential input on the Spartan-6, which can sample DDR LVDS at up to 1 GHz. This will produce a single horizontal slice across a plot of impedance mismatch/reflection intensity vs time/distance.

      By sending multiple pulses in sequence with successively increasing reference voltages from the DAC, it should be possible to reconstruct an entire TDR trace to at least 8 bits of precision for a fraction of the cost of even a single 1 GSa/s ADC.

      Given the 5 ns/m nominal propagation delay of cat5 cable (10 ns/m after round-trip delay), the theoretical spatial resolution limit at 1 GHz sampling is 10 cm, although I expect noise and sampling issues to reduce usable positioning accuracy to 20-50 cm, and the TDR will need to be calibrated with a known length of cable from the same lot if exact propagation delays are needed to compute the precise location of a fault.
    • 10-channel DC power supply

      Offshoot of the PDU. Ten-channel buck converter stepping 24 VDC down to an adjustable output voltage, operating frequency around 1.5 MHz. Digital feedback loop with support for soft-start, state machine based current limiting and overcurrent shutdown, etc.

      More details TBD once I have time to flesh out the concept a bit.
    • Gigabit Ethernet protocol analyzer
      Spartan-6 connected to three 1000BASE-T PHYs. Packets coming in port A are sent out port B unchanged, and vice versa. All traffic going either way is buffered in some kind of RAM, then encapsulated inside a TCP stream and sent out port C to an analysis computer which can record stats, write a pcap, etc.

      The capture will be raw layer-1 and include the preamble, FCS, metadata describing link state changes and autonegotiation status, and cycle-accurate timestamps. Error injection may be implemented eventually if needed.

    • 128-channel logic analyzer
      This will be based on RED TIN, my existing FPGA-based ILA, but with more features and an external 4GB DDR3 SODIMM for buffering packet data. A 64-bit data bus at 1066 MT/s should be more than capable of pushing 32 channels at 1 GHz, 64 at 500 MHz, or 128 at 250 MHz. The input standards planned to be supported are LVCMOS from 1.5 to 3.3V, LVDS, SSTL, and possibly 5V LVTTL if the input buffer has sufficient range. I haven't looked into CML yet but may add this as well.

      The FPGA board will connect to the host PC via a 10gbit Ethernet link using SFP+ direct attach cabling. Dumping 4GB (32 Gb) of data over 10gbe should take somewhere around 4 seconds after protocol overhead, or less if the capture depth is set to less than the maximum.

      The FPGA board will connect via matched-impedance 100-ohm parallel cables (perhaps something like DigiKey 670-2626-ND) to eight active probe cards. Each probe card will have a MICTOR or similar connector to the DUT providing numerous grounds, optional SSTL Vref, 16 digital inputs, and two clock/strobe inputs with optional complement inputs for differential operation. An internal DAC will allow generation of a threshold voltage for single-ended LVCMOS inputs.

      The probe card input stage will consist of the following for each channel:
      • Unity-gain buffer to reduce capacitive load on the DUT
      • Low-speed precision analog mux to select external Vref (for SSTL) or internal Vref (for LVCMOS). This threshold voltage may be shared across several/all channels in the probe card, TBD.
      • High-speed LVDS-output comparator to compare single-ended inputs against the muxed Vref.
      • 2:1 LVDS mux for selecting single-ended or differential inputs. Input A is the LVDS output from the comparator, input B is the buffered differential input from this and the adjacent channel. To reduce bit-to-bit skew all channels will have this mux even though it's redundant for odd-numbered channels.
      The end result will be 16 LVDS data bits and 2 LVDS clock bits, fed over 18 differential pairs to the FPGA board. The remaining lines in the ribbon will be used for shielding grounds, analog power, and an I2C bus to control the DAC and drive an I/O expander for controlling the mux selectors.
    LA input stage for two single-ended or one differential channel
    • 4-channel DSO
      This will use the same FPGA + DDR3 + 10gbe back end as the LA, but with the digital input stage replaced by an AFE and two of TI's 1.5 GSa/s dual ADCs with interleaving support.

      This will give me either two channels at 3 GSa/s with a target bandwidth of 500 MHz, or four channels at 1.5 GSa/s with a target bandwidth of 250 MHz. The resulting raw data rate will be 3 GSa/s * 8 bits * 2 channels or 48 Gbps, and should comfortably fit within the capacity of a 64-bit DDR3 1066 interface.

      I have no more details at this point as my mixed-signal-fu is not yet to the point that I can design a suitable AFE. This will be the last project on the list to be done due to both the cost of components and the difficulty.
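    The transposed-SAR sampling scheme described for the TDR above (replay the pulse once per DAC threshold and accumulate the 1-bit comparator decisions into a full trace) can be sketched in simulation. This is an illustrative model with a made-up waveform and voltage range, not firmware or a design for the actual board:

```python
import numpy as np

# Illustrative simulation of the TDR's transposed-SAR sampling scheme:
# the same waveform is replayed once per reference voltage, and the
# 1-bit comparator results are summed into an 8-bit reconstruction.
# The waveform and voltage range are hypothetical.
BITS = 8
LEVELS = 2 ** BITS
V_MIN, V_MAX = -1.0, 1.0

# A stand-in for the repeatable reflection waveform seen by the comparator.
true_wave = 0.6 * np.exp(-np.linspace(0, 5, 256)) * np.cos(np.linspace(0, 20, 256))

refs = np.linspace(V_MIN, V_MAX, LEVELS, endpoint=False)
# One comparator pass per DAC reference voltage: output is 1 where the
# waveform exceeds the threshold. Summing the passes counts how many
# thresholds each sample cleared, i.e. its quantized level.
codes = sum((true_wave > v).astype(int) for v in refs)
reconstructed = V_MIN + codes * (V_MAX - V_MIN) / LEVELS

# Reconstruction error is bounded by one LSB of the 8-bit range.
lsb = (V_MAX - V_MIN) / LEVELS
print(np.max(np.abs(reconstructed - true_wave)) <= lsb)  # prints True
```

    The trade-off is acquisition time: 256 replayed pulses per trace instead of one, in exchange for needing only a fast comparator and a DAC rather than a 1 GSa/s ADC.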

    by Andrew Zonenberg at August 27, 2014 12:46 AM

    August 24, 2014


    Atmel AT90USB162 : weekend die-shot

    Atmel AT90USB162 is an 8-bit microcontroller with hardware USB, 16KiB flash and 512B of SRAM/EEPROM.

    August 24, 2014 09:41 PM

    August 22, 2014


    A bit of advertising

    This year, the book “GPS, GLONASS, Galileo, and BeiDou for Mobile Devices: From Instant to Precise Positioning” by Dr Ivan G. Petrovski was published. It contains a link to my article. More details about the book are available through the link


    August 22, 2014 10:13 AM