copyleft hardware planet

May 02, 2016

ZeptoBARS

Maxim ds2401z - serial number chip : weekend die-shot

Dallas Semiconductor/Maxim DS2401 is a factory pre-programmed silicon serial number chip.
Right in the center of the die you can see 64-bit laser-trimmed ROM. Die size 1346x686 µm.


May 02, 2016 11:00 PM

May 01, 2016

Harald Welte

Developers wanted for Osmocom GSM related work

Right now I'm feeling sad. I really shouldn't, but I still do.

Many years ago I started OpenBSC and Osmocom in order to bring Free Software into an area where it barely existed before: Cellular Infrastructure. For the first few years, it was "just for fun", without any professional users. A FOSS project by enthusiasts. Then we got some commercial / professional users, and with them funding, paying for e.g. Holger and my freelance work. Still, implementing all protocol stacks, interfaces and functional elements of GSM and GPRS from the radio network to the core network is something that large corporations typically spend hundreds of man-years on. So funding for Osmocom GSM implementations was always short, and we always tried to make the best out of it.

After Holger and I started sysmocom in 2011, we had a chance to use funds from BTS sales to hire more developers, and we were growing our team of developers. We finally could pay some developers other than ourselves for working on Free Software cellular network infrastructure.

In 2014 and 2015, sysmocom got side-tracked with some projects where Osmocom and the cellular network were only one small part of a much larger scope. In Q4/2015 and in 2016, we are back on track, focusing 100% on Osmocom projects, which you can probably see from the much larger number of commits to the respective project repositories.

By now, we are in the lucky situation that the work we've done in the Osmocom project on providing Free Software implementations of cellular technologies like GSM, GPRS, EDGE and now also UMTS is receiving a lot of attention. This attention translates into companies approaching us (particularly at sysmocom) regarding funding for implementing new features, fixing existing bugs and short-comings, etc. As part of that, we can even work on much needed infrastructural changes in the software.

So now we are in the opposite situation: There's a lot of interest in funding Osmocom work, but there are few people in the Osmocom community interested in and/or capable of following up on that. Some of the early contributors have moved into other areas, and are now working on proprietary cellular stacks at large multi-national corporations. Some others think of GSM as a fun hobby and want to keep it that way.

At sysmocom, we are trying hard to do what we can to keep up with the demand. We've been looking to add people to our staff, but right now we are struggling just to compensate for the regular fluctuation of employees (i.e. to keep the team size as it is), let alone actually adding new members to our team to help move free software cellular networks ahead.

I am struggling to understand why that is. I think Free Software in cellular communications is one of the most interesting and challenging frontiers for Free Software to work on. And there are many FOSS developers who love nothing more than to conquer new areas of technology.

At sysmocom, we can now offer what would have been my personal dream job for many years:

  • paid work on Free Software that is available to the general public, rather than something only of value to the employer
  • interesting technical challenges in an area of technology where you will not find the answer to all your problems on stackoverflow or the like
  • work in a small company consisting almost entirely of die-hard engineers, without corporate managers, marketing departments, etc.
  • work in an environment free of Microsoft and Apple software or cloud services; use exclusively Free Software to get your work done

I would hope that more developers would appreciate such an environment. If you're interested in helping to move FOSS cellular networks ahead, feel free to have a look at http://sysmocom.de/jobs or contact us at jobs@sysmocom.de. Together, we can try to move Free Software for mobile communications to the next level!

by Harald Welte at May 01, 2016 10:00 PM

April 30, 2016

Bunnie Studios

Circuit Classics — Sneak Peek!

My first book on electronics was Getting Started with Electronics; to this day, I still imagine electrons as oval-shaped particles with happy faces because of its illustrations. So naturally, I was thrilled to find that the book’s author, Forrest Mims III, and my good friend Star Simpson joined forces to sell kit versions of classic circuits straight off the pages of Getting Started with Electronics. This re-interpretation of a classic as an interactive kit is perfect for today’s STEM curriculum, and I hope it will inspire another generation of engineers and hackers.

I’m very lucky that Star sent me a couple early prototypes to play with. Today was a rainy Saturday afternoon, so I loaded a few tracks from Information Society’s Greatest Hits album (I am most definitely a child of the 80’s) and fired up my soldering iron for a walk down memory lane. I remembered how my dad taught me to bend the leads of resistors with pliers, to get that nice square look. I remembered how I learned to use masking tape and bent leads to hold parts in place, so I could flip the board over for soldering. I remembered doodling circuits on scraps of paper after school while watching Scooby-Doo cartoons on a massive CRT TV that took several minutes to warm up. Things were so much simpler back then …

I couldn’t help but embellish a little bit. I added a socket for the chip on my Bargraph Voltage Indicator (when I see chips in sockets, I hear a little voice in my head whispering “hack me!” “fix me!” “reuse me!”), and swapped out the red LEDs for some high-efficiency white LEDs I happened to have on the shelf.

I appreciated Star’s use of elongated pads on the DIP components, a feature not necessary for automated assembly but of great assistance to hand soldering.

It works! Here I am testing the bargraph voltage indicator with a 3V coin cell on my (very messy) keyboard desk.

Voilà! My rendition of a circuit classic. I think the photo looks kind of neat in inverse color.

I really appreciate seeing a schematic printed on a circuit board next to its circuit. It reminds me that before Open Hardware, hardware was open. Schematics like these taught me that circuits were knowable; unlike the mysteries of quantum physics and molecular biology, virtually every circuit is a product of human imagination. That another engineer designed it means that any other engineer could understand it, given sufficient documentation. As a youth, I didn’t understand what these symbols and squiggles meant; but just knowing that a map existed set me on a path toward greater comprehension.

Whether as a walk down nostalgia lane or as a first step into electronics, Circuit Classics are a perfect activity for young and old alike. If you want to learn more, check out Star Simpson’s crowdfunding campaign on Crowd Supply!

by bunnie at April 30, 2016 04:19 PM

Hacking Humble Bundle

I’m very honored and proud to have one of my books offered as part of the Hacking Humble Bundle. Presented by No Starch Press, the Hacking Humble Bundle is offering several eBook titles for a “pay-what-you-feel” price, including my “Hacking the Xbox”, along with “Automate the Boring Stuff with Python”, “The Linux Command Line” and “The Smart Girl’s Guide to Privacy”. Of course, you can already download Hacking the Xbox for free, but if you opt to pay at least $15 you can get 9 more fantastic titles — check out all of them at the Humble Bundle page.

One of the best parts about a humble bundle is you have a say in where your money goes.

If you click on “Choose where your money goes” near the checkout area, you’re presented with a set of sliders that let you pick how much money goes to charity, how much to the publisher, and how much as a tip to the Humble Bundle. For the Hacking Humble Bundle, the default charity is the EFF (you’re free to pick others if you want). For the record, I don’t get any proceeds from the Humble Bundle; I’m in it to support the EFF and No Starch.

If you enjoyed Hacking the Xbox, this is a perfect opportunity to give back to a charitable organization that was instrumental in making it happen. Without the EFF’s counsel, I wouldn’t have known my rights. Knowledge is power, and their support gave me the courage I needed to stand up and assert my right to hack, despite imposing adversaries. To this day, the EFF continues to fight for our rights on the digital frontier, and we need their help more than ever. No Starch has also been a stalwart supporter of hackers; their founder, Bill Pollock, and his “Damn the Torpedoes, Full Speed Ahead” attitude toward publishing potentially controversial topics has enabled hackers to educate the world about relevant but edgy technical topics.

If hacking interests you, it’s probably worth the time to check out the Hacking Humble Bundle and give a thought about what it’s worth to you. After all, you can “pay what you feel” and still get eBooks in return.

by bunnie at April 30, 2016 03:49 PM

April 26, 2016

Free Electrons

How we found that the Linux nios2 memset() implementation had a bug!

NIOS II is a 32-bit RISC embedded processor architecture designed by Altera for its family of FPGAs: Cyclone III, Cyclone IV, etc. Being a soft-core architecture, by using Altera’s Quartus Prime design software, you can adjust the CPU configuration to your needs and instantiate it into the FPGA. You can customize various parameters like the instruction or the data cache size, enable/disable the MMU, enable/disable an FPU, and so on. And for us embedded Linux engineers, a very interesting aspect is that both the Linux kernel and the U-Boot bootloader, in their official versions, support the NIOS II architecture.

Recently, one of our customers designed a custom NIOS II platform, and we are working on porting the mainline U-Boot bootloader and the mainline Linux kernel to this platform. The U-Boot porting went fine, and quickly allowed us to load and start a Linux kernel. However, the Linux kernel was crashing very early with:

[    0.000000] Linux version 4.5.0-00007-g1717be9-dirty (rperier@archy) (gcc version 4.9.2 (Altera 15.1 Build 185) ) #74 PREEMPT Fri Apr 22 17:43:22 CEST 2016
[    0.000000] bootconsole [early0] enabled
[    0.000000] early_console initialized at 0xe3080000
[    0.000000] BUG: failure at mm/bootmem.c:307/__free()!
[    0.000000] Kernel panic - not syncing: BUG!

This BUG() comes from the __free() function in mm/bootmem.c. The bootmem allocator is a simple page-based allocator used very early in the Linux kernel initialization for the very first allocations, even before the regular buddy page allocator and other allocators such as kmalloc are available. We were slightly surprised to hit a BUG in a generic part of the kernel, and immediately suspected some platform-specific issue, like an invalid load address or link address for our kernel, or other problems along those lines. But we quickly came to the conclusion that everything looked good on that side, and so we went on to actually understand what this BUG was all about.

The NIOS II memory initialization code in arch/nios2/kernel/setup.c does the following:

bootmap_size = init_bootmem_node(NODE_DATA(0),
                                 min_low_pfn, PFN_DOWN(PHYS_OFFSET),
                                 max_low_pfn);
[...]
free_bootmem(memory_start, memory_end - memory_start);

The first call init_bootmem_node() initializes the bootmem allocator, which primarily consists in allocating a bitmap, with one bit per page. The entire bootmem bitmap is set to 0xff via a memset() during this initialization:

static unsigned long __init init_bootmem_core(bootmem_data_t *bdata,
        unsigned long mapstart, unsigned long start, unsigned long end)
{
        [...]
        mapsize = bootmap_bytes(end - start);
        memset(bdata->node_bootmem_map, 0xff, mapsize);
        [...]
}

After doing the bootmem initialization, the NIOS II architecture code calls free_bootmem() to mark all the memory pages as available, except the ones that contain the kernel itself. To achieve this, the __free() function (which is the one triggering the BUG) clears the bits corresponding to the page to be marked as free. When clearing those bits, the function checks that the bit was previously set, and if it’s not the case, fires the BUG:

static void __init __free(bootmem_data_t *bdata,
                        unsigned long sidx, unsigned long eidx)
{
        [...]
        for (idx = sidx; idx < eidx; idx++)
                if (!test_and_clear_bit(idx, bdata->node_bootmem_map))
                        BUG();
}

So to summarize, we were in a situation where a bitmap is memset to 0xff, but almost immediately afterwards, a function that clears some bits finds that some of the bits are already cleared. Sounds odd, doesn’t it?

We started by double checking that the address of the bitmap was the same between the initialization function and the __free() function, verifying that the code was not overwriting the bitmap, and other obvious issues. But everything looked alright. So we simply dumped the bitmap after it was initialized by memset to 0xff, and to our great surprise, we found that the bitmap was in fact initialized with the pattern 0xff00ff00 and not 0xffffffff. This obviously explained why we were hitting this BUG(): simply because the buffer was not properly initialized. At first, we really couldn’t believe this: how is it possible that something as essential as memset() in Linux was not doing its job properly?

On the NIOS II platform, memset() has an architecture-specific implementation, available in arch/nios2/lib/memset.c. For buffers smaller than 8 bytes, this memset implementation uses a simple naive loop, iterating byte by byte. For larger buffers, it uses a more optimized implementation, using inline assembly. This implementation writes data in blocks of 4 bytes rather than 1 byte at a time to speed up the memset.
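
For reference, the byte-by-byte path is essentially the textbook loop. The following is a simplified sketch of that naive approach, not the exact kernel code:

#include <stddef.h>

/* Simplified sketch of the byte-by-byte path: write the 1-byte pattern
 * one byte at a time. Correct for any size, but slow for large buffers. */
void *memset_naive(void *s, int c, size_t count)
{
	unsigned char *p = s;

	while (count--)
		*p++ = (unsigned char)c;

	return s;
}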

We quickly tested a workaround that consisted in using the naive implementation for all buffer sizes, and it solved the problem: we had a booting kernel, all the way to the point where it mounts a root filesystem! So clearly, it’s the optimized implementation in assembly that had a bug.

After some investigation, we found out that the bug was in the very first instructions of the assembly code. The following piece of assembly is supposed to create a 4-byte value that repeats 4 times the 1-byte pattern passed as an argument to memset:

/* fill8 %3, %5 (c & 0xff) */
"       slli    %4, %5, 8\n"
"       or      %4, %4, %5\n"
"       slli    %3, %4, 16\n"
"       or      %3, %3, %4\n"

This code takes as input in %5 the one-byte pattern, and is supposed to return in %3 the 4-byte pattern. It goes through the following logic:

  • Stores in %4 the initial pattern shifted left by 8 bits. Provided an initial pattern of 0xff, %4 should now contain 0xff00
  • Does a logical or between %4 and %5, which leads to %4 containing 0xffff
  • Stores in %3 the 2-byte pattern shifted left by 16 bits. %3 should now contain 0xffff0000.
  • Does a logical or between %3 and %4, i.e. between 0xffff0000 and 0xffff, which gives the expected 4-byte pattern 0xffffffff

When you look at the source code, it looks perfectly fine, so our source code review didn’t spot the problem. However, when we looked at the disassembly of the actual compiled code, we got:

34:	280a923a 	slli	r5,r5,8
38:	294ab03a 	or	r5,r5,r5
3c:	2808943a 	slli	r4,r5,16
40:	2148b03a 	or	r4,r4,r5

Here r5 gets used for both %4 and %5. Due to this, the final pattern stored in r4 is 0xff00ff00 instead of the expected 0xffffffff.
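
The effect of this register reuse is easy to reproduce with plain C arithmetic. The following standalone snippet (just an illustration, not part of the kernel code) steps through both the intended sequence and the one that was actually generated:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t c = 0xff;      /* the 1-byte pattern, input operand %5 */
	uint32_t tmp, fill;

	/* Intended sequence, with %4 and %5 in distinct registers */
	tmp  = c << 8;          /* 0x0000ff00 */
	tmp |= c;               /* 0x0000ffff */
	fill  = tmp << 16;      /* 0xffff0000 */
	fill |= tmp;            /* 0xffffffff */
	printf("expected: 0x%08x\n", (unsigned)fill);

	/* What the generated code does: r5 serves as both %4 and %5 */
	uint32_t r5 = 0xff, r4;
	r5 = r5 << 8;           /* slli r5,r5,8   -> 0x0000ff00 */
	r5 = r5 | r5;           /* or   r5,r5,r5  -> 0x0000ff00 (a no-op) */
	r4 = r5 << 16;          /* slli r4,r5,16  -> 0xff000000 */
	r4 = r4 | r5;           /* or   r4,r4,r5  -> 0xff00ff00 */
	printf("buggy:    0x%08x\n", (unsigned)r4);

	return 0;
}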

Now, if we take a look at the output operands, %4 is defined with the "=r" constraint, i.e. as an output operand. How do we prevent the compiler from reusing the corresponding register for another operand? As explained in this document, "=r" alone does not prevent gcc from using the same register for an output operand (%4) and an input operand (%5). By adding the constraint modifier & (in addition to "=r"), we tell the compiler that this operand is written before the instruction has finished using the input operands, and so its register cannot also be assigned to an input operand.

With this change, we get the following assembly output:

34:	2810923a 	slli	r8,r5,8
38:	4150b03a 	or	r8,r8,r5
3c:	400e943a 	slli	r7,r8,16
40:	3a0eb03a 	or	r7,r7,r8

This is much better, and correctly produces the 0xffffffff pattern when 0xff is provided as the initial 1-byte pattern to memset.

In the end, the final patch only adds one character to adjust the inline assembly constraint and gets the proper behavior from gcc:

diff --git a/arch/nios2/lib/memset.c b/arch/nios2/lib/memset.c
index c2cfcb1..2fcefe7 100644
--- a/arch/nios2/lib/memset.c
+++ b/arch/nios2/lib/memset.c
@@ -68,7 +68,7 @@ void *memset(void *s, int c, size_t count)
 		  "=r" (charcnt),	/* %1  Output */
 		  "=r" (dwordcnt),	/* %2  Output */
 		  "=r" (fill8reg),	/* %3  Output */
-		  "=r" (wrkrega)	/* %4  Output */
+		  "=&r" (wrkrega)	/* %4  Output only */
 		: "r" (c),		/* %5  Input */
 		  "0" (s),		/* %0  Input/Output */
 		  "1" (count)		/* %1  Input/Output */

This patch was sent upstream to the NIOS II kernel maintainers:
[PATCH v2] nios2: memset: use the right constraint modifier for the %4 output operand, and has already been applied by the NIOS II maintainer.

We were quite surprised to find a bug in some common code for the NIOS II architecture: we were assuming it would have already been tested on enough platforms and with enough compilers/situations to not have such issues. But all in all, it was a fun debugging experience!

It is worth mentioning that in addition to this bug, we found another bug affecting NIOS II platforms, in the asm-generic implementation of the futex_atomic_cmpxchg_inatomic() function, which was causing some preemption imbalance warnings during the futex subsystem initialization. We also sent a patch for this problem, which has also been applied already.

by Romain Perier at April 26, 2016 03:03 PM

April 22, 2016

Elphel

Tutorial 01: Access to Elphel camera documentation from 3D model

We have created a short video tutorial to help our users navigate through 3D models of Elphel cameras. Cameras can be virtually taken apart and put back together which helps to understand the camera configuration and access information about every camera component. Please feel free to comment on the video quality and usefulness, as we are launching a series of tutorials about cameras, software modifications, FPGA development on 10393 camera board, etc. and we would like to receive feedback on them.

Description:

In this video we will show how the 3D model of Elphel NC393 camera can be used to view the camera, understand the components it is made of, take it apart and put back together, and get access to each part’s documentation.

The camera model is made using X3DOM technology and is auto-generated from the STEP files used for production.

In your browser you can open the link to one of the camera assemblies from Elphel wiki page:

The buttons on the right list all camera components.

You can click on one of the buttons and the component will be highlighted on the model. Click again and the part will be shown without the rest of the model.
From here, using the buttons at the bottom of the screen, you can open the part in a new window, look for the part on the Elphel wiki, or hide the part and see the rest of the model.
You can return to the whole model by clicking on the part button once more, or use the reset model button in the top left corner.

You can also select a part by clicking on it directly on the model.

To deselect it, click again.

A right click removes the part, so you can get access to the inside of the camera.

Once you have selected the part you can look for more information about it on Elphel wiki.

For the selected board you can type the board name in the wiki search and get access to the board description, circuit diagram, parts list and PCB layout.

All Elphel software is Free Software, distributed under the GNU GPL, and Elphel camera designs are open hardware, distributed under the CERN Open Hardware License.

by olga at April 22, 2016 02:08 AM

April 21, 2016

Free Electrons

Article on the CHIP in French Linux magazine

Free Electrons engineer and Allwinner platform maintainer Maxime Ripard has written a long article presenting the Nextthing C.H.I.P platform in issue #18 of French magazine OpenSilicium, dedicated to open source in embedded systems. The C.H.I.P has even been used for the front cover of the magazine!

OpenSilicium #18

In this article, Maxime presents the C.H.I.P platform, its history and the choice of the Allwinner SoC. He then details how to set up a developer-friendly environment to use the board, building and flashing from scratch U-Boot, the kernel and a Debian-based root filesystem. Finally, he describes how to use Device Tree overlays to describe additional peripherals connected to the board, with the traditional example of the LED.

OpenSilicium #18 CHIP article

In the same issue, OpenSilicium also covers numerous other topics:

  • A feedback on the FOSDEM 2016 conference
  • Uploading code to STM32 microcontrollers: the case of STM32-F401RE
  • Kernel and userspace debugging with ftrace
  • IoT prototyping with Buildroot
  • RIOT, the free operating system for the IoT world
  • Interview with Cedric Bail, working on the Enlightenment Foundation Libraries for Samsung
  • Setup of Xenomai on the Zynq Zedboard
  • Decompression of 3R data stream using a VHDL-described circuit
  • Write a userspace device driver for an FPGA using UIO

by Thomas Petazzoni at April 21, 2016 08:56 PM

Free Electrons contributions to Linux 4.5

Linus Torvalds just released Linux 4.5, for which the major new features have been described by LWN.net in three articles: part 1, part 2 and part 3. Out of a total of 12080 commits, Free Electrons contributed 121 patches, almost exactly 1% of the total. Thanks to his large number of patches, Free Electrons engineer Boris Brezillon appears among the top contributors for the 4.5 kernel in the LWN.net statistics article.

This time around, our important contributions were:

  • Addition of a driver for the Microcrystal rv1805 RTC, by Alexandre Belloni.
  • A huge number of patches touching all NAND controller drivers and the MTD subsystem, from Boris Brezillon. They are the first step of a more general rework of how NAND controllers and NAND chips are handled in the Linux kernel. As Boris explains in the cover letter, his series aims at clarifying the relationship between the mtd and nand_chip structures and hiding NAND framework internals to NAND. […]. This allows removal of some of the boilerplate code done in all NAND controller drivers, but most importantly, it unifies a bit the way NAND chip structures are instantiated.
  • On the support for the Marvell ARM processors:
    • In the mvneta networking driver (used on Armada 370, XP, 38x and soon on Armada 3700): addition of naive RSS support with per-CPU queues, configuration of XPS support, and numerous fixes for potential race conditions.
    • Fix in the Marvell CESA driver
    • Misc improvements to the mv_xor driver for the Marvell XOR engines.
    • After four years of development the 32-bit Marvell EBU platform support is now pretty mature, and the majority of patches for this platform now are improvements of existing drivers or bug fixes rather than new hardware support. Of course, the support for the 64-bit Marvell EBU platforms has just started, and will require a significant number of patches and contributions to be fully supported upstream, which is an on-going effort.
  • On the support for the Atmel ARM processors:
    • Addition of the support for the L+G VInCo platform.
    • Improvement to the macb network driver to reset the PHY using a GPIO.
    • Fix Ethernet PHY issues on Atmel SAMA5D4
  • On the support for Allwinner ARM processors:
    • Implement audio capture in the sun4i audio driver.
    • Add the support for a special pin controller available on Allwinner A80.

The complete list of our contributions:

by Thomas Petazzoni at April 21, 2016 01:48 PM

April 20, 2016

Free Electrons

Slides from the Embedded Linux Conference

Two weeks ago, the entire Free Electrons engineering team (9 people) attended the Embedded Linux Conference in San Diego. We had a really good time there, with lots of interesting talks and useful meetings and discussions.

Tim Bird opening the conference; discussion between Linus Torvalds and Dirk Hohndel

In addition to attending the event, we also participated by giving 5 different talks on various topics, for which we are publishing the slides:

Boris Brezillon, the new NAND Linux subsystem maintainer, presented on Modernizing the NAND framework: The big picture.

Boris Brezillon's talk on the NAND subsystem

Antoine Ténart presented on Using DT overlays to support the C.H.I.P’s capes.

Antoine Tenart's talk on using DT overlays for the CHIP

Maxime Ripard, maintainer of the Allwinner platform support in Linux, presented on Bringing display and 3D to the C.H.I.P computer.

Maxime Ripard's talk on display and 3D for the CHIP

Alexandre Belloni and Thomas Petazzoni presented Buildroot vs. OpenEmbedded/Yocto Project: a four hands discussion.

Belloni and Petazzoni's talk on OpenEmbedded vs. Buildroot

Thomas Petazzoni presented GNU Autotools: a tutorial.

Petazzoni's tutorial on the autotools

All the other slides from the conference are available from the event page as well as from the eLinux.org wiki. All the talks have been recorded, and the videos will hopefully be posted soon by the Linux Foundation.

by Thomas Petazzoni at April 20, 2016 09:17 AM

April 19, 2016

Free Electrons

Free Electrons engineer Boris Brezillon becomes Linux NAND subsystem maintainer

Free Electrons engineer Boris Brezillon has been involved in the support for NAND flashes in the Linux kernel for quite some time. He is the author of the NAND driver for the Allwinner ARM processors, made several improvements to the NAND GPMI controller driver, initiated a significant rework of the NAND subsystem, and is working on supporting MLC NANDs. Boris is also very active on the linux-mtd mailing list, reviewing patches from others and making suggestions.

Hynix NAND flash

For those reasons, Boris was recently appointed by the MTD maintainer Brian Norris as a new maintainer of the NAND subsystem. NAND is considered a sub-subsystem of the MTD subsystem, and as such, Boris will be sending pull requests to Brian, who in turn is sending pull requests to Linus Torvalds. See this commit for the addition of Boris as a NAND maintainer in the MAINTAINERS file. Boris will therefore be in charge of reviewing and merging all the patches touching drivers/mtd/nand/, which consists mainly of NAND drivers. Boris has created a nand/next tree on GitHub, where he has already merged a number of patches that will be pushed to Brian Norris during the 4.7 merge window.

We are happy to see another of our engineers take up a maintainer position in the kernel community. Maxime Ripard was already a co-maintainer of the Allwinner ARM platform support, Alexandre Belloni a co-maintainer of the RTC subsystem and of the Atmel ARM platform support, Grégory Clement a co-maintainer of the Marvell EBU platform support, and Antoine Ténart a co-maintainer of the Annapurna Labs platform support.

by Thomas Petazzoni at April 19, 2016 07:59 AM

April 16, 2016

ZeptoBARS

NXP/Philips BC857BS - dual pnp BJT : weekend die-shot

SOT-363 package contains 2 separate identical transistor dies.
Size of each die is 285x259 µm.


April 16, 2016 02:35 AM

April 12, 2016

Free Electrons

Slides from Collaboration Summit talk on Linux kernel upstreaming

As we announced in a previous blog post, Free Electrons CTO Thomas Petazzoni gave a talk at the Collaboration Summit 2016 covering the topic of “Upstreaming hardware support in the Linux kernel: why and how?“.

The slides of the talk are now available in PDF format.

Upstreaming hardware support in the Linux kernel: why and how?

Through this talk, we identified a number of major reasons that should encourage hardware vendors to contribute the support for their hardware to the upstream Linux kernel, and gave some hints on how to achieve that. Of course, within a 25-minute time slot, it was not possible to get into the details, but hopefully the general hints we shared, based on our significant Linux kernel upstreaming experience, were useful to the audience.

Unfortunately, none of the talks at the Collaboration Summit were recorded, so no video will be available for this talk.

by Thomas Petazzoni at April 12, 2016 11:35 AM

April 10, 2016

Free Electrons

“Porting Linux on ARM” seminar road show in France

In December 2015, Free Electrons engineer Alexandre Belloni gave a half-day seminar “Porting Linux on ARM” in Toulouse (France) in partnership with the French organization Captronic. We published the materials used for the seminar shortly after the event.

We are happy to announce that this seminar will be given in six different cities in France over the next few months:

  • In Montpellier, on April 14th from 2 PM to 6 PM. See this page for details.
  • In Clermont-Ferrand, on April 27th from 2 PM to 6 PM. See this page for details.
  • In Brive, on April 28th from 9 AM to 1 PM. See this page for details.
  • Near Chambéry, on May 25th from 9:30 AM to 5:30 PM. See this page for details.
  • Near Bordeaux, on June 2nd from 2 PM to 6 PM. See this page for details.
  • Near Nancy, on June 16th from 2 PM to 6 PM. See this page for details.

The seminar is delivered in French, and the event is free after registration. The speaker, Alexandre Belloni, has worked on porting bootloaders and the Linux kernel on a number of ARM platforms (Atmel, Freescale, Texas Instruments and more) and is the Linux kernel co-maintainer for the RTC subsystem and the support of the Atmel ARM processors.

by Thomas Petazzoni at April 10, 2016 08:56 PM

April 09, 2016

Bunnie Studios

Name that Ware, April 2016

The Ware for April 2016 is shown below.

The ware this month is courtesy of Philipp Gühring. I think it should be a bit more challenging than the past couple of months’ wares. If readers are struggling to guess this one by the end of this month, I’ve got a couple of other photos Philipp sent which should give additional clues.

But I'm interested to see what people think this is, with just this photo!

by bunnie at April 09, 2016 11:22 AM

April 03, 2016

ZeptoBARS

ST TS971 : weekend die-shot

ST TS971 is a single 12 MHz R2R opamp in a SOT23-5 package with low noise and low distortion.
Die size 1079x799 µm.


April 03, 2016 08:40 AM

March 30, 2016

Elphel

Synchronizing Verilog, Python and C

Elphel NC393, like all the previous camera models, relies on the intimate cooperation of the FPGA programmed in Verilog HDL and the software that runs on a general purpose CPU. Just as the FPGA manufacturers keep increasing the speed and density of their devices, so do the Elphel cameras evolve. The FPGA code consists of hundreds of files and tens of thousands of lines of code, and is constantly modified during the lifetime of the product, both by us and by our users, to adapt the cameras to their applications. In most cases, unless the change is just a bug fix or a minor improvement of previously implemented functionality, the software (and multiple layers of it) needs to be aware of the changes. This is both the power and the challenge of such hybrid systems, and synchronizing the changes is an important issue.

Verilog parameters

The Verilog code of the camera consists of parameterized modules. We try to use parameters and generate Verilog operators in most cases, but `define macros and `ifdef conditional directives are still used to switch some global options (like synthesis vs. compilation, or various debug levels). The Eclipse-based VDT that we use for FPGA development is aware of the parameters, and when the code instantiates a parameterized module that has parameter-dependent port widths, VDT verifies that the instance ports match the signals connected to them, and warns the developer if that is not the case. Many parameters are routed through the levels of the hierarchy, so the deeper instances can be controlled from a single header file, making it obvious which parameters influence which modules' operation. Some parameters are specified directly, while some have to be calculated – this is the case for the register address decoders of the same module instances for different channels: such channels have the same relative address maps, but different base addresses. Most of the camera parameters (not counting the trivial ones, where the module instance parameters are defined by the nature of the code) are contained in a single x393_parameters.vh header file. There are more than six hundred of them, and most influence the software API.

Development cycle

When implementing new camera FPGA functionality, we start with simulation – always. Sometimes very small changes can be applied to the code, synthesized and tested in the actual hardware, but it almost never works to bypass the simulation step. So far all the simulation we use consists of plain old Verilog test benches (such as this or that) – not even SystemVerilog. For simulating CPU+FPGA devices, it would most likely be ideal to use a software programming language to model the CPU side of the SoC and keep Verilog (or VHDL, for those who prefer it) for the FPGA. Something like cocotb may work, especially since we are already manually translating Verilog into Python, but we are not there yet.

Translating Verilog to Python

So the next step is, as I just mentioned, manual translation of the Verilog tasks and functions used in simulation into Python code that can run on the actual hardware. The result does not look extremely pythonic, as I try to follow the already tested Verilog code, but it is OK. Not all of the translation is manual – we use the import_verilog_parameters.py module to “understand” the parameters defined in the Verilog files (including the simple arithmetic and logical operations used to generate derivative parameters/localparams in the Verilog code) and get the values from the same source, which reduces the possibility of accidentally using old software with a modified FPGA implementation. As the parameters are known to the program only at run time, PyDev (running, btw, in the same Eclipse IDE as VDT – just as a different “perspective”) cannot catch misspelled parameter names. So the program has an option to modify itself and generate pre-defines for each of the parameters. Only the top part of the vrlg module is human-generated; everything under line 120 is automatically generated (and has to be re-generated only after adding new parameters to the Verilog source).

Hardware testing with Python programs

Once the Verilog code is manually translated (or while new parts of the code are being translated or developed from scratch), it is possible to operate the actual camera. The top module is still called test_mcntrl, as it started with DDR3 memory calibration using the Levenberg-Marquardt algorithm (luckily it needs to run just once – it takes the camera 10 minutes to do the full calibration this way).

This program keeps track of the Verilog parameters and macros, exposes all the functions (those whose names do not begin with an underscore), extracts docstrings from the code and combines them with the generated list of function parameters and their default values, and provides regexp search/help for the functions (a must when there are hundreds of such functions). The following code ran in the camera:

x393 +0.043s--> help w.*_sensor_r
=== write_sensor_reg16 ===
defined in x393_sensor.X393Sensor, /usr/local/bin/x393_sensor.py: 496)
Write i2c register in immediate mode
@param num_sensor - sensor port number (0..3), or "all" - same to all sensors
@param reg_addr16 - 16-bit register address (page+low byte, for MT9P006 high byte is an 8-bit slave address = 0x90)
@param reg_data16 - 16-bit data to write to sensor register
     Usage: write_sensor_reg16 <num_sensor> <reg_addr16> <reg_data16>
x393 +0.010s-->

And the same one in the PyDev console window of the Eclipse IDE – “simulated” means that the program could not detect the FPGA, so it is not running on the target hardware:

x393(simulated) +0.121s--> help w.*_sensor_r
=== write_sensor_reg16 ===
defined in x393_sensor.X393Sensor, /home/andrey/git/x393/py393/x393_sensor.py: 496)
Write i2c register in immediate mode
@param num_sensor - sensor port number (0..3), or "all" - same to all sensors
@param reg_addr16 - 16-bit register address (page+low byte, for MT9P006 high byte is an 8-bit slave address = 0x90)
@param reg_data16 - 16-bit data to write to sensor register
     Usage: write_sensor_reg16 <num_sensor> <reg_addr16> <reg_data16>
x393(simulated) +0.001s-->

The Python program was also used for the initial development of the AHCI SATA controller (before it was added as a Linux kernel platform driver), but the number of parameters there is much smaller, and most of the addresses are defined by the AHCI standard.

Synchronizing parameters with the kernel drivers

The next step is to update/redesign/develop the Linux kernel drivers to support the camera functionality. Learning the lessons from the previous camera models (where the software was growing with the hardware incrementally), we are trying to minimize manual intervention in the process of synchronizing the different layers of code (including the “hardware” one). The previous cameras' interface to the FPGA consisted of hand-crafted files such as x353.h. It started with x313.h (for the NC313 – our first camera based on an Axis CPU and a Xilinx FPGA; the same file was used in the NC323 that scanned many billions of book pages), was modified for the NC333, and later for our previous NC353 used in car-mounted panoramic cameras that captured most of the world’s roads.

Each time the files were modified to accommodate the new hardware, it was always a challenge to add extra bits to the memory controller addresses, or to the image frame widths and heights (they are now all 16 bits wide – enough for multi-gigapixel sensors). With the Python modules already knowing all the current values of the Verilog parameters that define the software interface, it was natural to generate the C files needed to interface with the hardware in the same environment.

Implementation of the register access in the FPGA

The memory-mapped registers in the camera share the same access mechanism – they use the MAXIGP0 (CPU master, general purpose, channel 0) AXI port available in the SoC, generously mapped to 1/4 of the whole 32-bit address range (0x40000000..0x7fffffff). While logically all the locations are 32-bit wide, some use just 1 byte or even no data at all – any write to such an address causes a defined action.

Internally the commands are distributed to the target modules over a tree of byte-parallel buses that tolerate register insertion; at the endpoints they are converted to parallel format by cmd_deser.v instances. The status data from the modules (sent by status_generate.v) is routed as messages (also in byte-parallel format, to reduce the required FPGA routing resources) to a single block memory that the CPU can read over AXI with zero delay. The status generation by the subsystems is individually programmed to occur either on demand (in response to a write operation by the CPU) or automatically when the register data changes. While this write and read mechanism is common, the nature of the registers and data may be very different, as the project combines many modules designed at different times for different purposes. All the memory-mapped locations in the design fall into 3 categories:

  • Read-only registers that allow reading status from the various modules, DMA pointers and other small data items.
  • Read/write registers – the ones where the result of writing does not depend on any context. The full write register address range has a shadow memory block in parallel, so reading from such an address will return the data that was last written there.
  • Write-only registers – all other registers, where the write action depends on the context. Some modules include large tables exposed through a pair of address/data locations in the address map; many others have independent bit fields with a corresponding “set” bit, so internal values are modified only for the selected field.

Register access as C11 anonymous members

All the registers in the design are 32-bit wide and aligned to 4-byte boundaries, even though not all of them use all the bits. Another common feature of the register model used here is that some modules exist in multiple instances with evenly spaced base addresses, and some have a 2-level hierarchy (channel and sub-channel), where the address is a sum of the category base address, the relative register address and a linear combination of the two indices.

An individual C typedef is generated for each set of registers that have different meanings of the bit fields – this way it is possible to benefit from the compiler's type checking. All the types fit into 32 bits, and as in many cases the same hardware register can accept alternative values for individual bit fields, we use unions of anonymous (to make access expressions shorter) bit-field structures.

Here is a generated example of such typedef code (full source):

// I2C contol/table data

typedef union {
    struct {
          u32        tbl_addr: 8; // [ 7: 0] (0) Address/length in 64-bit words (<<3 to get byte address)
          u32                :20;
          u32        tbl_mode: 2; // [29:28] (3) Should be 3 to select table address write mode
          u32                : 2;
    }; 
    struct {
          u32             rah: 8; // [ 7: 0] (0) High byte of the i2c register address
          u32             rnw: 1; // [    8] (0) Read/not write i2c register, should be 0 here
          u32              sa: 7; // [15: 9] (0) Slave address in write mode
          u32            nbwr: 4; // [19:16] (0) Number of bytes to write (1..10)
          u32             dly: 8; // [27:20] (0) Bit delay - number of mclk periods in 1/4 of the SCL period
          u32    /*tbl_mode*/: 2; // [29:28] (2) Should be 2 to select table data write mode
          u32                : 2;
    }; 
    struct {
          u32         /*rah*/: 8; // [ 7: 0] (0) High byte of the i2c register address
          u32         /*rnw*/: 1; // [    8] (0) Read/not write i2c register, should be 1 here
          u32                : 7;
          u32            nbrd: 3; // [18:16] (0) Number of bytes to read (1..18, 0 means '8')
          u32           nabrd: 1; // [   19] (0) Number of address bytes for read (0 - one byte, 1 - two bytes)
          u32         /*dly*/: 8; // [27:20] (0) Bit delay - number of mclk periods in 1/4 of the SCL period
          u32    /*tbl_mode*/: 2; // [29:28] (2) Should be 2 to select table data write mode
          u32                : 2;
    }; 
    struct {
          u32  sda_drive_high: 1; // [    0] (0) Actively drive SDA high during second half of SCL==1 (valid with drive_ctl)
          u32     sda_release: 1; // [    1] (0) Release SDA early if next bit ==1 (valid with drive_ctl)
          u32       drive_ctl: 1; // [    2] (0) 0 - nop, 1 - set sda_release and sda_drive_high
          u32    next_fifo_rd: 1; // [    3] (0) Advance I2C read FIFO pointer
          u32                : 8;
          u32         cmd_run: 2; // [13:12] (0) Sequencer run/stop control: 0,1 - nop, 2 - stop, 3 - run 
          u32           reset: 1; // [   14] (0) Sequencer reset all FIFO (takes 16 clock pulses), also - stops i2c until run command
          u32                :13;
          u32    /*tbl_mode*/: 2; // [29:28] (0) Should be 0 to select controls
          u32                : 2;
    }; 
    struct {
          u32             d32:32; // [31: 0] (0) cast to u32
    }; 
} x393_i2c_ctltbl_t;

Some member names in the example above are commented out (like /*tbl_mode*/ in lines 398, 408 and 420). This is done because some bit fields (in this case bits [29:28]) have the same meaning in all alternative structures, and auto-generating complex union/structure combinations to create valid C code with each member having a unique name would produce rather clumsy code. Instead, the generator script makes sure that same-named members really designate the same bit fields, and then makes them anonymous while preserving the names for a human reader. The last member (u32 d32:32;) is added to each union, making it possible to address each of them as an unsigned long variable without casting.
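
As a small usage illustration (assuming the x393_i2c_ctltbl_t typedef above and the kernel u32 type; the field values are made up for the example), driver code can compose a “table data write” word through the anonymous bit fields and then hand its d32 image to whichever generated setter function applies:

/* Illustrative only: fill one i2c table entry in "table data write" mode */
static u32 make_i2c_write_entry(void)
{
	x393_i2c_ctltbl_t entry;

	entry.d32      = 0;         /* start from a clean 32-bit word        */
	entry.rah      = 0x30;      /* high byte of the i2c register address */
	entry.rnw      = 0;         /* write (not read) access               */
	entry.sa       = 0x90 >> 1; /* 7-bit slave address (MT9P006 example) */
	entry.nbwr     = 2;         /* write two data bytes                  */
	entry.dly      = 100;       /* bit delay: mclk periods per 1/4 SCL   */
	entry.tbl_mode = 2;         /* 2 selects table data write mode       */

	return entry.d32;           /* 32-bit image for a generated setter   */
}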

And this is a snippet of the part of the generator code that produced it:

def _enc_i2c_tbl_wmode(self):
    dw=[]
    dw.append(("rah",      vrlg.SENSI2C_TBL_RAH,    vrlg.SENSI2C_TBL_RAH_BITS, 0, "High byte of the i2c register address"))
    dw.append(("rnw",      vrlg.SENSI2C_TBL_RNWREG,                         1, 0, "Read/not write i2c register, should be 0 here"))
    dw.append(("sa",       vrlg.SENSI2C_TBL_SA,     vrlg.SENSI2C_TBL_SA_BITS,  0, "Slave address in write mode"))
    dw.append(("nbwr",     vrlg.SENSI2C_TBL_NBWR,   vrlg.SENSI2C_TBL_NBWR_BITS,0, "Number of bytes to write (1..10)"))
    dw.append(("dly",      vrlg.SENSI2C_TBL_DLY,    vrlg.SENSI2C_TBL_DLY_BITS, 0, "Bit delay - number of mclk periods in 1/4 of the SCL period"))
    dw.append(("tbl_mode", vrlg.SENSI2C_CMD_TAND,                           2, 2, "Should be 2 to select table data write mode"))
    return dw

The vrlg.* values used above are in turn read from the x393_parameters.vh Verilog file:

//i2c page table bit fields
    parameter SENSI2C_TBL_RAH =        0, // high byte of the register address
    parameter SENSI2C_TBL_RAH_BITS =   8,
    parameter SENSI2C_TBL_RNWREG =     8, // read register (when 0 - write register
    parameter SENSI2C_TBL_SA =         9, // Slave address in write mode
    parameter SENSI2C_TBL_SA_BITS =    7,
    parameter SENSI2C_TBL_NBWR =      16, // number of bytes to write (1..10)
    parameter SENSI2C_TBL_NBWR_BITS =  4,
    parameter SENSI2C_TBL_NBRD =      16, // number of bytes to read (1 - 8) "0" means "8"
    parameter SENSI2C_TBL_NBRD_BITS =  3,
    parameter SENSI2C_TBL_NABRD =     19, // number of address bytes for read (0 - 1 byte, 1 - 2 bytes)
    parameter SENSI2C_TBL_DLY =       20, // bit delay (number of mclk periods in 1/4 of SCL period)
    parameter SENSI2C_TBL_DLY_BITS=    8,

The auto-generated files also include x393.h, which provides other constant definitions (like valid values for the bit fields – lines 301..303) and function declarations to access the registers. The names of the functions for read-only and write-only registers are derived from the symbolic address names by converting them to lower case; the ones that deal with read/write registers get set_ and get_ prefixes.

#define X393_CMPRS_CBIT_CMODE_JPEG18           0x00000000 // Color 4:2:0
#define X393_CMPRS_CBIT_FRAMES_SINGLE          0x00000000 // Use single-frame buffer
#define X393_CMPRS_CBIT_FRAMES_MULTI           0x00000001 // Use multi-frame buffer

// Compressor control

void               x393_cmprs_control_reg (x393_cmprs_mode_t d, int cmprs_chn);  // Program compressor channel operation mode
void               set_x393_cmprs_status  (x393_status_ctrl_t d, int cmprs_chn); // Setup compressor status report mode
x393_status_ctrl_t get_x393_cmprs_status  (int cmprs_chn);

Register access functions are implemented with readl() and writel(); this is the corresponding section of the x393.c file:

// Compressor control

void               x393_cmprs_control_reg (x393_cmprs_mode_t d, int cmprs_chn)  {writel(d.d32, mmio_ptr + (0x1800 + 0x40 * cmprs_chn));} // Program compressor channel operation mode
void               set_x393_cmprs_status  (x393_status_ctrl_t d, int cmprs_chn) {writel(d.d32, mmio_ptr + (0x1804 + 0x40 * cmprs_chn));} // Setup compressor status report mode
x393_status_ctrl_t get_x393_cmprs_status  (int cmprs_chn)                       { x393_status_ctrl_t d; d.d32 = readl(mmio_ptr + (0x1804 + 0x40 * cmprs_chn)); return d; }
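
Putting the generated types and accessors together, a driver would use them roughly as follows. This is only a sketch built from the three functions and the d32 member shown above; the individual bit fields of the two types are not reproduced here, and the values are purely illustrative:

/* Sketch: program compressor channel 0 and poke its status control register */
void example_cmprs_setup(void)
{
	x393_cmprs_mode_t  mode;
	x393_status_ctrl_t status;

	mode.d32 = 0;                      /* illustrative value                */
	x393_cmprs_control_reg(mode, 0);   /* write-only register, channel 0    */

	status = get_x393_cmprs_status(0); /* read the shadowed r/w register    */
	set_x393_cmprs_status(status, 0);  /* and write it back unchanged       */
}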

There are two other header files generated from the same data. One of them (x393_defs.h) is just an alternative way to represent the register addresses – instead of the getter and setter functions it defines preprocessor macros:

// Compressor control

#define X393_CMPRS_CONTROL_REG(cmprs_chn) (0x40001800 + 0x40 * (cmprs_chn)) // Program compressor channel operation mode, cmprs_chn = 0..3, data type: x393_cmprs_mode_t (wo)
#define X393_CMPRS_STATUS(cmprs_chn)      (0x40001804 + 0x40 * (cmprs_chn)) // Setup compressor status report mode, cmprs_chn = 0..3, data type: x393_status_ctrl_t (rw)

The last generated file, x393_map.h, uses the preprocessor macro format to provide a full, ordered address map of all the available registers for all channels and sub-channels. It is intended to be used just as a reference for developers, not as an actual include file.

Conclusions

The generated code for the Elphel NC393 camera is definitely very hardware-specific; its main purpose is to encapsulate as much as possible of the hardware interface details and thus reduce the dependence of the higher layers of software on modifications of the HDL code. Such tasks are common to other projects that involve CPU/FPGA tandems, and a similar approach to organizing the software/hardware interface may be useful there too.

by andrey at March 30, 2016 08:04 PM

March 27, 2016

Harald Welte

You can now install a GSM network using apt-get

This is great news: You can now install a GSM network using apt-get!

Thanks to the efforts of Debian developer Ruben Undheim, there's now an OpenBSC (with all its flavors like OsmoBSC, OsmoNITB, OsmoSGSN, ...) package in the official Debian repository.

Here is the link to the e-mail indicating acceptance into Debian: https://tracker.debian.org/news/755641

For the many years I have been working on the OpenBSC (and wider Osmocom) projects, I always assumed that distribution packaging is not really something all that important, as all the people using OpenBSC would surely be technical enough to build it from source. And in fact, I believe that building from source brings you one step closer to actually modifying the code, and thus to contributing.

Nevertheless, the project has matured to a point where it is not only used by developers anymore, but particularly also (god beware) by people with limited experience with Linux in general. That such people still exist is surprisingly hard to realize for somebody like myself, who has spent more than 20 years in Linux land by now.

So all in all, today I think that having packages in a Distribution like Debian actually is important for the further adoption of the project - pretty much like I believe that more and better public documentation is.

Looking forward to seeing the first bug reports filed through bugs.debian.org rather than https://projects.osmocom.org/ . Once that happens, we will know that people are actually using the official Debian packages.

As an unrelated side note, the Osmocom project now also has nightly builds available for Debian 7.0, Debian 8.0 and Ubuntu 14.04 on both i586 and x86_64 architectures from https://build.opensuse.org/project/show/network:osmocom:nightly. The nightly builds are for people who want to stay on the bleeding edge of the code, but who don't want to go through building everything from scratch. See Holger's post on the openbsc mailing list for more information.

by Harald Welte at March 27, 2016 10:00 PM

March 26, 2016

ZeptoBARS

ST TS321 - generic SOT23 opamp : weekend die-shot

ST TS321 is a single opamp in a SOT23-5 package, designed to match and exceed the industry-standard LM358A and LM324 opamps.
Die size 1270x735 µm.


March 26, 2016 08:18 AM

March 24, 2016

Video Circuits

Seeing Sound

I will be giving a talk on some research I have been doing into early British video synthesis and electronic video work at this year's Seeing Sound. I will also be screening some work from contemporary Video Circuits regulars as part of the conference.
www.seeingsound.co.uk Sign up here!


by Chris (noreply@blogger.com) at March 24, 2016 08:59 AM

March 22, 2016

Bunnie Studios

Formlabs Form 2 Teardown

I don’t do many teardowns on this blog, as several other websites already do an excellent job of that, but when I was given the chance to take apart a Formlabs Form 2, I was more than happy to oblige. About three years ago, I had posted a teardown of a Form 1, which I received as a Kickstarter backer reward. Today, I’m looking at a Form 2 engineering prototype. Now that the Form 2 is in full production, the prototypes are basically spare parts, so I’m going to unleash my inner child and tear this thing apart with no concern about putting it back together again.

For regular readers of this blog, this teardown takes the place of March 2016’s Name that Ware — this time, I’m the one playing Name that Ware and y’all get to follow along as I adventure through the printer. Next month I’ll resume regular Name that Ware content.

First Impressions

I gave the Form 2 a whirl before tearing it into an irreparable pile of spare parts. In short, I’m impressed; the Form 2 is a major upgrade from the Form 1. It’s an interesting contrast to Makerbot. The guts of the Makerbot Replicator 2 are basically the same architecture as previous models, inheriting all the limitations of its previous incarnation.

The Form 2 is a quantum leap forward. The product smells of experienced, seasoned engineers; a throwback to the golden days of Massachusetts Route 128 when DEC, Sun, Polaroid and Wang Laboratories cranked out quality American-designed gear. Formlabs wasn’t afraid to completely rethink, re-architect, and re-engineer the system to build a better product, making bold improvements to core technology. As a result, the most significant commonality between the Form 1 and the Form 2 is the iconic industrial design: an orange acrylic box sitting atop an aluminum base with rounded corners and a fancy edge-lit power button.

Before we slip off the cover, here’s a brief summary of the upgrades that I picked up on while doing the teardown:

  • The CPU is upgraded from a single 72MHz ST Micro STM32F103 Cortex-M3 to a 600 MHz TI Sitara AM3354 Cortex A8, with two co-processors: a STM32F030 as a signal interface processor, and a STM32F373 as a real-time DSP on the galvo driver board.
  • This massive upgrade in CPU power leapfrogs the UI from a single push button plus monochrome OLED on the Form 1, to a full-color 4.3” capacitive touch screen on the Form 2.
  • The upgraded CPU also enables the printer to have built-in wifi & ethernet, in addition to USB. Formlabs thoughtfully combines this new TCP/IP capability with a Bonjour client. Now, computers can automatically discover and enumerate Form 2’s on the local network, making setup a snap.
  • The UI also makes better use of the 4 GB of on-board FLASH by adding the ability to “replay” jobs that were previously uploaded, making the printer more suitable for low volume production.
  • The galvanometers are full custom, soup-to-nuts. We’ll dig into this more later, but presumably this means better accuracy, better print jobs, and a proprietary advantage that makes it much harder for cloners to copy the Form 2.
  • The optics pathway is fully shrouded, eliminating dust buildup problems. A beautiful and much easier to clean AR-coated glass surface protects the internal optics; internal shrouds also limit the opportunity for dust to settle on critical surfaces.
  • The resin tray now features a heater with closed-loop control, for more consistent printing performance in cold New England garages in the dead of winter.
  • The resin tray is now auto-filling from an easy to install cartridge, enabling print jobs that require more resin than could fit in a single tank while making resin top-ups convenient and spill-free.
  • The peel motion is now principally lateral, instead of vertical.
  • The resin tank now features a stirrer. On the Form 1, light scattering would create thickened pools of partially cured resin near the active print region. Presumably the stirrer helps homogenize the resin; I also remember someone once mentioning the importance of oxygen to the surface chemistry of the resin tank.
  • There are novel internal photosensor elements that hint at some sort of calibration/skew correction mechanism.
  • There’s a tilt sensor and manual mechanical leveling mechanism. A level tank prevents the resin from pooling to one side.
  • There are sensors that can detect the presence of the resin tank and the level of the resin. With all these new sensors, the only way a user can bork a print is to forget to install the build platform.
  • Speaking of tank detection, the printer now remembers what color resin was used on a given tank, so you don’t accidentally spoil a clear resin tank with black resin.
  • The power supply is now fully embedded; goodbye PSU failures and weird ground loop issues. It’s a subtle detail, but it’s the sort of “grown-up” thing that younger companies avoid doing because it complicates safety certification and requires compliance to elevated internal wiring and plastic flame retardance standards.
  • I’m also guessing there are a number of upgrades that are less obvious from a visual inspection, such as improvements to the laser itself, or optimizations to the printing algorithm.

    These improvements indicate a significant manpower investment on the part of Formlabs, and an incredible value add to the core product, as many of the items I note above would take several man-months to bring to production-ready status.

    Test Print

    As hinted from the upgrade list, the UI has been massively improved. The touchscreen-based UI features tech-noir themed iconography and animations that would find itself at home in a movie set. This refreshing attention to detail sets the Form 2’s UI apart from the utilitarian “designed-by-programmers-for-geeks” UI typical of most digital fabrication tools.


    A UI that would seem at home on a Hollywood set. Life imitating art imitating life.

    Unfortunately, the test print didn’t go smoothly. Apparently the engineering prototype had a small design problem which caused the resin tray’s identification contacts to intermittently short against the metal case during a peel operation. This would cause the bus shared between the ID chips on the resin tank and the filler cartridge to fail. As a result, the printer paused twice on account of a bogus “missing resin cartridge” error. Thankfully, the problem would eventually fix itself, and the print would automatically resume.


    Test print from the Form 2. The red arrow indicates the location of a hairline artifact from the print pausing for a half hour due to issues with resin cartridge presence detection.

    The test print came out quite nicely, despite the long pauses in printing. There’s only a slight, hairline artifact where the printer had stopped, so that’s good – if the printer actually does run out of resin, the printer can in fact pause without a major impact on print quality.

    Significantly, this problem is fixed in my production unit – with this unit, I’ve had no problems with prints pausing due to the resin cartridge ID issue. It looks like they tweaked the design of the sheet metal around the ID contacts, giving it a bit more clearance and effectively solving the problem. It goes to show how much time and resources are required to vet a product as complex as a 3D printer – with so many sensors, moving parts, and different submodules that have to fit together perfectly throughout a service life involving a million cycles of movement, it takes a lot of discipline to chase down every last detail. So far, my production Form 2 is living up to expectations.

    Removing the Outer Shell

    I love that the Form 2, like the Form 1, uses exclusively hex and torx drive fasteners. No crappy Phillips or slotted screws here! They also make extensive use of socket cap style screws, which are a perennial favorite of mine.

    Removing the outer shell and taking a look around, we continue to see evidence of thoughtful engineering. The cable assemblies are all labeled and color-coded; there’s comprehensive detail on chassis grounding; the EMI countermeasures are largely designed-in, as opposed to band-aided at the last minute; and the mechanical engineering got kicked up a notch.

    I appreciated the inclusion of an optical limit switch on the peel drive. The previous generation’s peel mechanism relied on a mechanical clutch with a bit of overdrive, which meant every peel cycle ended with a loud clicking sound. Now, it runs much more quietly, thanks to the feedback of the limit switch.


    Backside of the Form 2 LCD + touchscreen assembly.

    The touchpanel and display are mounted on the outer shell. The display is a DLC0430EZG 480×272 pixel TFT LCD employing a 24-bit RGB interface. I was a bit surprised at the use of a 30-pin ribbon cable to transmit video data between the electronics mainboard and the display assembly, as unshielded ribbon cables are notorious for unintentional RF emissions that complicate the certification process. However, a closer examination of the electronics around the ribbon cable reveals the inclusion of a CMOS-to-LVDS serdes IC on either side of the cable. Although this increases the BOM, the use of differential signaling greatly reduces the emissions footprint of the ribbon cable while improving signal integrity over an extended length of wire.

    Significantly, the capacitive touchpanel’s glass seems to be a full custom job, as indicated by the fitted shape with hole for mounting the power button. The controller IC for the touchpanel is a Tango C44 by PIXCIR, a fabless semiconductor company based out of Suzhou, China. It’s heartening to see that the market for capacitive touchpanels has commoditized to the point where a custom panel makes sense for a relatively low volume product. I remember trying to source captouch solutions back in 2008, just a couple years after the iPhone’s debut popularized capacitive multi-touch sensors. It was hard to get any vendor to return your call if you didn’t have seven figures in your annual volume estimate, and the quoted NRE for custom glass was likewise prohibitive.

    Before leaving the touchpanel and display subsection, I have to note with a slight chuckle the two reference designators (R22 and U4) that are larger than the rest. It’s a purely cosmetic mistake which I recognize because I’ve done it myself several times. From the look of the board, I’m guessing it was designed using Altium. Automatic ECOs in Altium introduce new parts with a goofy huge default designator size, and it’s easy to miss the difference. After all, you spend most of your time editing the PCB with the silkscreen layer turned off.

    The Electronics

    As an electronics geek, my attention was first drawn to the electronics mainboard and the galvanometer driver board. The two are co-mounted on the right hand side of the printer, with a single 2×8 0.1” header spanning the gap between the boards. The mounting seems to be designed for easy swapping of the galvanometer board.

    I have a great appreciation for Formlabs’ choice of using a Variscite SOM (system-on-module). I can speak from first-hand experience, having designed the Novena laptop, that it’s a pain in the ass to integrate a high speed CPU, DDR3 memory, and power management into a single board with complex mixed-signal circuitry. Dropping down a couple BGA’s and routing the DDR3 fly-by topology while managing impedance and length matching is just the beginning of a long series of headaches. You then get to look forward to power sequencing, hardware validation, software drivers, factory testing, yield management and a hundred extra parts in your supply chain. Furthermore, many of the parts involved in the CPU design benefit from economies of scale much larger than can be achieved from this one product alone.

    Thus while it may seem attractive from a BOM standpoint to eliminate the middleman and integrate everything into a single PCB, from a system standpoint the effort may not amortize until the current version of the product has sold a few thousand units. By using a SOM, Formlabs reduces specialized engineering staff, saves months on the product schedule, and gains the option to upgrade their CPU without having to worry about amortization.

    Furthermore, the pitch of the CPU and DDR3 BGAs is optimized for compact designs and assumes a 6 or 8-layer PCB with 3 or 4-mil design rules. If you think about it, only the 2 square inches around the CPU and DRAM require these design rules. If the entire design is just a couple square inches, it’s no big deal to fab the entire board using premium design rules. However, the Form 2’s main electronics board is about 30 square inches. Only 2 square inches of this would require the high-spec design rules, meaning they would effectively be fabricating 28 square inches of stepper motor drivers using an 8-layer PCB with 3-mil design rules. The cost to fabricate such a large area of PCB adds up quickly, and by reducing the technology requirement of the larger PCB they probably make up decent ground on the cost overhead of the SOM.

    Significantly, Formlabs was very selective about what they bought from Variscite: the SOM contained neither Wifi nor FLASH memory, even though the SOM itself had provisions for both. These two modules can be integrated onto the mainboard without driving up technology requirements, so Formlabs opted to self-source these components. In essence, they kept Variscite’s mark-up limited to a bare minimum set of components. The maturity to pick and choose cost battles is a hallmark of an engineering team with experience working in a startup environment. Engineers out of large, successful companies are used to working with virtually limitless development budgets and massive purchasing leverage, and typically show less discretion when allocating effort to cost reduction.


    Mainboard assembly with SOM removed; back side of SOM is photoshopped into the image for reference.

    I also like that Formlabs chose to use eMMC FLASH, instead of an SD card, for data storage. It’s probably a little more expensive, but the supply chain for eMMC is a bit more reliable than commodity SD memory. As eMMC is soldered onto the board, J3 was added to program the memory chip after assembly. It looks like the same wires going to the SOM are routed to J3, so the mainboard is probably programmed before the SOM is inserted.

    Formlabs also integrates the stepper motor drivers into the mainboard, instead of using DIP modules like the Makerbot did until at least the Replicator’s Mighty Board Rev E. I think the argument I heard for the DIP modules was serviceability; however, I have to imagine the DIP modules are problematic for thermal management. PCBs are pretty good heatsinks, particularly those with embedded ground planes. Carving up the PCB into tiny modules appreciably increases the thermal resistance between the stepper motor driver and the air around it, which might actually drive up the failure rate. The layout of the stepper motor drivers on the Formlabs mainboard shows ample provision for heat to escape the chips into the PCB through multiple vias and large copper fills.


    Mainboard assembly with annotations according to the discussion in this post.

    Overall, the mainboard was thoughtfully designed and laid out; the engineering team (or engineer) was thinking at a system-level. They managed to escape the “second system effect” by restrained prioritization of engineering effort; just because they raised a pile of money didn’t mean they had to go re-engineer all the things. I also like that the entire layout is single-sided, which simplifies assembly, inspection and testing.

    I learned a lot from reading this board. I’ve often said that reading PCBs is better than reading a textbook for learning electronics design, which is part of the reason I do a monthly Name that Ware. For example, I don’t have extensive experience in designing motor controllers, so next time I need to design a stepper motor driver, I’m probably going to have a look at this PCB for ideas and inspiration – a trivial visual inspection will inform me on what parts they used, the power architecture, trace widths, via counts, noise isolation measures and so forth. Even if the hardware isn’t Open, there’s still a lot that can be learned just by looking at the final design.

    Now, I turn my attention to the galvanometer driver board. This is a truly exciting development! The previous generation used a fully analog driver architecture which I believe is based on an off-the-shelf galvanometer driver. A quick look around this PCB reveals that they’ve abandoned closing the loop in the analog domain, and stuck a microcontroller in the signal processing path. The signal processing is done by an STM32F373 – a 72 MHz, Cortex-M4 with FPU, HW division, and DSP extensions. Further enhancing its role as a signal processing element, the MCU integrates a triplet of 16-bit sigma-delta ADCs and 12-bit DACs. The board also has a smattering of neat-looking support components, such as an MCP42010 digital potentiometer, a fairly handsome OPA4376 precision rail-to-rail op amp, and a beefy LM1876 20W audio amplifier, presumably used to drive the galvanometer voice coils.

    The power for the audio amplifier is derived from a pair of switching regulators, a TPS54336A handling the positive rail, and an LTC3704 handling the negative rail. There’s a small ECO wire on the LTC3704 which turns off burst mode operation; probably a good idea, as burst mode would greatly increase the noise on the negative rail, and in this application standby efficiency isn’t a paramount concern. I’m actually a little surprised they’re able to get the performance they need using switching regulators, but with a 20W load that may have been the only practical option. I guess the switching regulator’s frequency is also much higher than the bandwidth of the galvos, so maybe in practice the switching noise is irrelevant. There is evidence of a couple of tiny SOT-23 LDOs scattered around the PCB to clean up the supplies going to sensitive analog front-end circuitry, and there’s also this curious combination of a FQD7N10L NFET plus MCP6L02 dual op-amp. It looks like they intended the NFET to generate some heat, given the exposed solder slug on the back side, which makes me think this could be a discrete pass-FET LDO of some type. There’s one catch: the MCP6L02 can only operate at up to 6V, and power inside the Form 2 is distributed at 24V. There’s probably something clever going on here that I’m not gathering from a casual inspection of the PCBs; perhaps later I’ll break out some oscope probes to see what’s going on.

    Overall, this ground-up redesign of the galvanometer driver should give Formlabs a strong technological foundation to implement tricks in the digital domain, which sets it apart from clones that still rely upon off-the-shelf fully analog galvanometer driver solutions.
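
    Purely as an illustration of what closing the loop in the digital domain can look like, here is a minimal PI servo update of the sort an STM32F373 could run: read the shaft angle from the sigma-delta ADC, compare it to the commanded position, and write a correction to the DAC feeding the power stage. The structure, gains and scaling below are my own placeholders, not Formlabs’ firmware.

    /* Hypothetical digital galvo servo loop. Gains and scale factors are
     * placeholders; a real servo would add feed-forward, filtering and
     * fault protection on top of this. */
    typedef struct {
        float kp;        /* proportional gain */
        float ki;        /* integral gain */
        float integ;     /* integrator state */
        float out_limit; /* clamp for the DAC command */
    } pi_state_t;

    /* One servo update, called at a fixed sample rate (e.g. from the ADC
     * end-of-conversion interrupt). Returns the value to write to the DAC. */
    static float galvo_pi_update(pi_state_t *s, float target, float measured, float dt)
    {
        float err = target - measured;

        s->integ += s->ki * err * dt;
        if (s->integ > s->out_limit)  s->integ = s->out_limit;   /* anti-windup clamp */
        if (s->integ < -s->out_limit) s->integ = -s->out_limit;

        float out = s->kp * err + s->integ;
        if (out > s->out_limit)  out = s->out_limit;
        if (out < -s->out_limit) out = -s->out_limit;
        return out;
    }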

    Before leaving our analysis of the electronics, let’s not forget the main power supply. It’s a Meanwell EPS-65-24-C. The power supply itself isn’t such a big deal, but the choice to include it within the chassis is interesting. Many, if not most, consumer electronic devices prefer to use external power bricks because it greatly simplifies certification. Devices that use voltages below 60V fall into the “easy” category for UL and CE certification. By pulling the power supply into the chassis, they are running line voltages up to 240V inside, which means they have to jump through IEC 60950-1 safety testing. It ups the ante on a number of things, including the internal wiring standards and the flame retardance of any plastics used in the assembly. I’m not sure why they decided to pull the power supply into the chassis; they aren’t using any fancy point-of-load voltage feedback to cancel out IR drops on the cable. My best guess is they felt it would either be a better customer experience to not have to deal with an external power brick, or perhaps they were bitten in the previous generation by flaky power bricks or ground loop/noise issues that sometimes plague devices that use external AC power supplies.

    The Mechanical Platform

    It turns out that my first instinct to rip out the electronics was probably the wrong order for taking apart the Form 2. A closer inspection of the base reveals a set of rounded rectangles that delineate the screws belonging to each physical subsystem within the device. This handy guide makes assembly (and repair) much easier.

    The central set of screws hold down the mechanical platform. Removing those causes the whole motor and optics assembly to pop off cleanly, giving unfettered access to all the electronics.

    I’m oddly excited about the base of the Form 2. It looks like just a humble piece of injection molded plastic. But this is an injection molded piece of plastic designed to withstand the apocalypse. Extensive ribbing makes the base extremely rigid, and resistant to warpage. The base is also molded using glass-filled polymer – the same tough stuff used to make Pelican cases and automotive engine parts. I’ve had the hots for glass-filled polymers recently, and have been itching for an excuse to use it in one of my designs. Glass-filled polymer isn’t for happy-meal toys or shiny gadgets, it’s tough stuff for demanding applications, and it has an innately rugged texture. I’m guessing they went for a bomb-proof base because anything less rigid would lead to problems keeping the resin tank level. Either that, or someone in Formlabs has the same fetish I have for glass filled polymers.

    Once removed from the base, the central mechanical chassis stands upright on its own. Inside this assembly is the Z-axis leadscrew for the build platform, resin level sensor, resin heater, peel motor, resin stirrer, and the optics engine.

    Here’s a close-up of the Z-stepper motor + leadscrew, resin level & temperature sensor, and resin valve actuator. The resin valve actuator is a Vigor Precision BO-7 DC motor with gearbox, used to drive a swinging arm loaded with a spring to provide the returning force. The arm pushes on the integral resin cartridge valve, which looks uncannily like the bite valve from a Camelback.

    The resin tank valve is complemented by the resin tank’s air vent, which also looks uncannily like the top of a shampoo bottle.

    My guess is Formlabs is either buying these items directly from the existing makers of Camelback and shampoo products, in which case First Sale Doctrine means any patent claims that may exist on these have been exhausted, or they have licensed the respective IP to make their own version of each.

    The resin level and temperature sensor assembly is also worth a closer look. It’s a PCB that’s mounted directly behind the resin tank, and in front of the Z-motor leadscrew.


    Backside of the PCB mounted directly behind the resin tank.

    It looks like resin level is measured using a TI FDC1004 capacitive liquid level sensor. I would have thought that capacitive sensing would be too fussy for accurate liquid level sensing, but after reading the datasheet for the FDC1004 I’m a little less skeptical. However, I imagine the sensor is extremely sensitive to all kinds of contamination, not the least of which is resin splattered or dripped onto the sensor PCB.
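
    To make the principle concrete: the capacitance seen by a sensing electrode rises roughly linearly as liquid covers it, so the level can be estimated by interpolating between calibrated “empty” and “full” readings. The sketch below is my own illustration of that idea; the calibration values and the lack of any contamination rejection are assumptions, not how Formlabs’ firmware necessarily works.

    /* Hypothetical conversion from a capacitance reading to a fill level.
     * c_empty_pf and c_full_pf would be determined at calibration time;
     * real firmware would also need to reject readings skewed by resin
     * splattered onto the sensor PCB. */
    typedef struct {
        float c_empty_pf; /* capacitance with no resin over the electrode */
        float c_full_pf;  /* capacitance with the electrode fully covered */
    } level_cal_t;

    /* Returns an estimated fill fraction between 0.0 and 1.0. */
    static float resin_level_from_capacitance(const level_cal_t *cal, float c_meas_pf)
    {
        float span = cal->c_full_pf - cal->c_empty_pf;
        if (span <= 0.0f)
            return 0.0f;                      /* bad calibration data */

        float frac = (c_meas_pf - cal->c_empty_pf) / span;
        if (frac < 0.0f) frac = 0.0f;
        if (frac > 1.0f) frac = 1.0f;
        return frac;
    }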


    Detail of the sensor PCB highlighting the non-contact thermopile temperature sensor.

    The resin temperature sense mechanism is also quite interesting. You’ll note a little silvery square, shrouded in plastic, mounted on the PCB behind the resin tank. First of all, the plastic shroud on my unit is clearly a 3D printed piece done by another Formlabs printer. You can see the nubs from the support structure and striation artifacts from the buildup process. I love that they’re dogfooding and using their own products to prototype and test; it’s a bad sign if the engineering team doesn’t believe in their own product enough to use it themselves.

    Unscrewing the 3D printed shroud reveals a curious flip-chip CSP device, which I’m guessing is a TI TMP006 or TMP007 MEMS thermopile. Although there are no part numbers on the chip, a quick read through the datasheet reveals a reference layout that is a dead ringer for the pattern on the PCB around the chip. Thermopiles can do non-contact remote temperature sensing, and it looks like this product has an accuracy of about +/-1 C between 0-60C. This explains the mystery of how they’re able to report the resin temperature on the UI without any sort of probe dipping into the resin tank.

    But then how do they heat it? Look under the resin tank mount, and we find another PCB.

    When I first saw this board, I thought its only purpose was to hold the leafspring contacts for the ID chip that helps track individual resin tanks and what color resin was used in them. Flip the PCB over, and you’ll see a curious pinkish tape covering the reverse surface.

    The pinkish tape is actually a thermal gap sealer, and peeling the tape back reveals that the PCB itself has a serpentine trace throughout, which means they are using the resistivity of the copper trace on the PCB itself as a heating mechanism for the resin.

    Again, I wouldn’t have guessed this is something that would work as well as it does, but there you have it. It’s a low-cost mechanism for controlling the temperature of the resin during printing. Probably the PCB material is the most expensive component, even more than the thermopile IR sensor, and all that’s needed to drive the heating element is a beefy BUK9277 NFET.
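
    For a feel of why a bare copper trace makes a workable heater, here is a back-of-the-envelope calculation. The trace dimensions below are guesses for illustration (I did not measure the actual board), and in practice the NFET would PWM the element under closed-loop control rather than run it flat-out.

    #include <stdio.h>

    /* Rough resistance and dissipation estimate for a serpentine PCB heater
     * trace. All dimensions are illustrative guesses, not measurements of
     * the Form 2 board. */
    int main(void)
    {
        const double rho_cu    = 1.68e-8;  /* copper resistivity, ohm*m */
        const double thickness = 35e-6;    /* 1 oz copper, ~35 um */
        const double width     = 0.25e-3;  /* 0.25 mm trace width */
        const double length    = 10.0;     /* 10 m of serpentine trace */
        const double v_supply  = 24.0;     /* internal distribution rail */

        double r = rho_cu * length / (width * thickness); /* R = rho*L/A, ~19 ohms */
        double p = v_supply * v_supply / r;               /* P = V^2/R, ~30 W peak */

        printf("trace resistance: %.1f ohm\n", r);
        printf("peak dissipation at %.0f V: %.1f W\n", v_supply, p);
        return 0;
    }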

    I’ve been to the Formlabs offices in Boston, and it does get rather chilly and dry there in the winter, so it makes sense they would consider cold temperature as a variable that could cause printing problems on the Form 2.

    Cold weather isn’t a problem here in Singapore; however, persistent 90% humidity conditions are an issue. If I didn’t use my Form 1 for several weeks, the first print would always come out badly; usually I’d have to toss the resin in the tank and pour a fresh batch for the print to come out. I managed to solve this problem by placing a large pack of desiccant next to the resin tank, as well as using the shipping lid to try to seal out moisture. However, I’m guessing they have very few users in the tropics, so humidity-related print problems are probably going to be a unique edge case I’ll have to solve on my own for some time to come.

    The Optics Pathway

    Finally, the optics – I’m saving the best for last. The optics pathway is the beating heart of the Form 2.


    The last thing uncured resin sees before it turns into plastic.

    The first thing I noticed about the optics is the inclusion of a protective glass panel underneath the resin tank. In the Form 1, if the build platform happened to drip resin while the tank was removed, or if the room was dusty, you had the unenviable task of reaching into the printer to clean the mirror. The glass panel simplifies the cleaning operation while protecting sensitive optics from dust and dirt.

    I love that the protective glass has an AR coating. You can tell there’s an AR coating from the greenish tint of the reflections off the surface of the glass. AR coatings are sexy; if I had a singles profile, you’d see “the green glint of AR-coated glasses” under turn-ons. Of course, the coating is there for functional reasons – any loss of effective laser power due to reflections off of the protective glass would reduce printing efficiency.

    The contamination-control measures don’t just stop at a protective glass cover. Formlabs also provisioned a plastic shroud around the entire optics assembly.


    Bottom view of the mechanical platform showing the protective shrouds hiding the optics.

    Immediately underneath the protective glass sheet is a U-shaped PCB which I can only assume is used for some kind of calibration. The PCB features five photodetectors: one mounted in “plain sight” of the laser, and four mounted in the far corners on the reverse side of the PCB, with the detectors facing into the PCB, such that the PCB is obscuring the photodetectors. A single, small pinhole located in the center of each detector allows light to fall onto the obscured photodetectors. However, the size of the pinhole and the dimensional tolerance of the PCB is probably too large for this to be an absolute calibration for the printer. My guess is this is probably used as more of a coarse diagnostic to confirm laser power and range of motion of the galvanometers.
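
    If I had to sketch how such a coarse diagnostic might work, it would be something like the routine below: steer the beam toward each corner, pulse the laser, and confirm that the corresponding pinhole detector sees light. Every function name, coordinate and threshold here is invented; this is a guess at the mechanism, not Formlabs’ code.

    /* Hypothetical hardware hooks -- placeholders, not a real API. */
    void galvo_move_to(int x, int y);
    void laser_pulse_low_power(void);
    int  photodiode_read(int index);

    #define NUM_CORNER_DETECTORS 4
    #define LIGHT_THRESHOLD      100   /* arbitrary ADC counts */

    struct corner { int galvo_x; int galvo_y; };

    static const struct corner corners[NUM_CORNER_DETECTORS] = {
        { -1000, -1000 }, { 1000, -1000 }, { -1000, 1000 }, { 1000, 1000 },
    };

    /* Returns 0 if every corner detector saw the beam, -1 otherwise. */
    int optics_self_test(void)
    {
        for (int i = 0; i < NUM_CORNER_DETECTORS; i++) {
            galvo_move_to(corners[i].galvo_x, corners[i].galvo_y);
            laser_pulse_low_power();
            if (photodiode_read(i) < LIGHT_THRESHOLD)
                return -1;   /* no light: laser power or galvo range problem */
        }
        return 0;
    }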

    Popping off the shroud reveals the galvanometer and laser assembly. The galvanometers sport a prominent Formlabs logo. They are a Formlabs original design, and not simply a relabeling of an off the shelf solution. This is a really smart move, especially in the face of increasing pressure from copycats. Focusing resources into building a proprietary galvo is a trifecta for Formlabs: they get distinguished print quality, reduced cost, and a barrier to competition all in one package. Contrast this to Formlabs’ decision to use a SOM for the CPU; if Formlabs can build their own galvo & driver board, they certainly had the technical capability to integrate a CPU into the mainboard. But in terms of priorities, improving the galvo is a much better payout.

    Readers unfamiliar with galvanometers may want to review a Name that Ware I did of a typical galvanometer a while back. In a nutshell, a typical galvanometer consists of a pair of voice coils rotating a permanent magnet affixed to a shaft. The shaft’s angle is measured by an optical feedback system, where a single light source shines onto a paddle affixed to the galvo’s shaft. The paddle alternately occludes light hitting a pair of photodetectors positioned behind the paddle relative to the light source.

    Now, here’s the entire Form 2 galvo assembly laid out in pieces.


    Close-up view of the photoemitter and detector arrangement.

    Significantly, the Form 2 galvo has not two, but four photodetectors, surrounding a single central light source. Instead of a paddle, a notch is cut into the shaft; the notch modulates the light intensity reaching the photodiodes surrounding the central light source according to the angle of the shaft.


    The notched shaft above sits directly above the photoemitter when the PCB is mated to the galvo body.

    This is quite different from the simple galvanometer I had taken apart previously. I don’t know enough about galvos to recognize if this is a novel technique, or what exactly is the improvement they hoped to get by using four photodiodes instead of two. With two photodiodes, you get to subtract out the common mode of the emitter and you’re left with the error signal representing the angle of the shaft: two variables solving for two unknowns. With four photodiodes, they can solve for a couple more unknowns – but what are they? Maybe they are looking to correct for alignment errors of the light source & photodetectors relative to the shaft, wobble due to imperfections in the bearings, or perhaps they’re trying to avoid a dead-spot in the response of the photodiodes as the shaft approaches the extremes of rotation. Or perhaps the explanation is as simple as removing the light-occluding paddle reduces the mass of the shaft assembly, allowing it to rotate faster, and four photodetectors were required to produce an accurate reading out of a notch instead of the paddle. When I reached out to Formlabs to ask about this, someone in the know responded that the new design is an improvement on three issues: more signal leading to an improved SNR, reduced impact of off-axis shaft motion, and reduced thermal drift due to better symmetry.
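
    One plausible way to turn four photodiode readings into an angle while rejecting emitter drift is a normalized differential measurement, sketched below. The pairing of the diodes and the scale factor are assumptions on my part, not measured Form 2 behavior.

    /* Hypothetical shaft angle estimate from four photodiode readings a..d
     * surrounding a single emitter. Subtracting opposing pairs gives a
     * signal that changes with shaft angle; dividing by the sum cancels
     * common-mode variation in emitter brightness and detector gain. */
    static float galvo_angle_estimate(float a, float b, float c, float d)
    {
        const float k_angle = 1.0f;      /* radians per unit of normalized signal (placeholder) */
        float sum = a + b + c + d;

        if (sum < 1e-6f)
            return 0.0f;                 /* emitter off or detector fault */

        float diff = (a + b) - (c + d);  /* grows as the notch uncovers one pair */
        return k_angle * (diff / sum);
    }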

    This is the shaft plus bearings once it’s pulled out of the body of the galvo. The gray region in the middle is the permanent magnet, and it’s very strong.

    And this is staring back into the galvo with the shaft removed. You can see the edges of the voice coils. I couldn’t remove them from the housing, as they seem to be fixed in place with some kind of epoxy.

    Epilogue
    And there you have it – the Form 2, from taking off its outer metal case down to the guts of its galvanometers. It was a lot of fun tearing down the Form 2, and I learned a lot while doing it. I hope you also enjoyed reading this post, and perhaps gleaned a couple useful bits of knowledge along the way.

    If you think Formlabs is doing cool stuff and solving interesting problems, good news: they’re hiring! They have new positions for a Software Lead and an Electrical Systems Lead. Follow the links for a detailed description and application form.

    by bunnie at March 22, 2016 05:58 PM

    Winner, Name that Ware February 2016

    The Ware for February 2016 was indeed a Commodore 65 prototype. As expected, the ware was quite easy to guess, and the prize goes to Philipp Mundhenk. Congrats, email me for your prize!

    Here’s an image of the full motherboard, and its boot screen:

    by bunnie at March 22, 2016 05:50 PM

    Free Electrons

    Free Electrons contributing Linux kernel initial support for Annapurna Labs ARM64 Platform-on-Chip

    We are happy to announce that on February 8th 2016 we submitted to the mainline Linux kernel the initial support for the Annapurna Labs Alpine v2 Platform-on-Chip based on the 64-bit ARMv8 architecture.

    See our patch series:

    Annapurna Labs was founded in 2011 in Israel. Annapurna Labs provides 32-bit and 64-bit ARM products, including chips and subsystems under the Alpine brand for home NAS, gateway and WiFi router equipment; see this page for details. The 32-bit version already has support in the official Linux kernel (see alpine.dtsi), and we have started to add support for the quad core 64-bit version, called Alpine v2, which brings a significant performance increase for the home market.

    This is our initial contribution and we plan to follow it with additional Alpine v2 functionality in the near future.

    by Thomas Petazzoni at March 22, 2016 05:38 AM

    March 18, 2016

    Elphel

    NAND flash support for Xilinx Zynq in U-Boot SPL

    Overview

    • Target board: Elphel 10393 (Xilinx Zynq 7Z030) with 1GB NAND flash
    • U-Boot final image files (both support NAND flash commands):
      • boot.bin - SPL image – loaded by Xilinx Zynq BootROM into OCM, no FSBL required
      • u-boot-dtb.img - full image – loaded by boot.bin into RAM
    • Build environment and dependencies (for details see this article) :


     

    The story

    First of all, Ezynq was updated to use the mainstream U-Boot to remove an extra agent (u-boot-xlnx) from the dependency chain. But since the flash driver for Xilinx Zynq hasn’t made it to the mainstream yet, it was copied into Ezynq’s source tree for U-Boot. When building, this tree is copied over the U-Boot source files. We will make a proper patch someday.

    Full image (u-boot-dtb.img)

    Next, the support for flash and commands was added to the board configuration for the full u-boot image. Required defines:

    include/configs/elphel393.h (from zynq-common.h in u-boot-xlnx):
    #define CONFIG_NAND_ZYNQ
    #ifdef CONFIG_NAND_ZYNQ
    #define CONFIG_CMD_NAND_LOCK_UNLOCK /*zynq driver doesn't have lock/unlock commands*/
    #define CONFIG_SYS_MAX_NAND_DEVICE 1
    #define CONFIG_SYS_NAND_SELF_INIT
    #define CONFIG_SYS_NAND_ONFI_DETECTION
    #define CONFIG_MTD_DEVICE
    #endif
    #define CONFIG_MTD

    NOTE: the original Zynq NAND flash driver for U-Boot (zynq_nand.c) doesn’t have Lock/Unlock commands. The same applies to pl35x_nand.c in the kernel they provide. By design, on power-on the NAND flash chip on the 10393 is locked (write protected). While these commands were added to both drivers, there’s no need to unlock in U-Boot, as all of the writing will be performed from the OS, booted from either flash or the micro SD card. Some designs out there with NAND flash do not have the flash locked on power-on.

    And configs/elphel393_defconfig:

    CONFIG_CMD_NAND=y

    There are a few more small modifications to add the driver to the build – see ezynq/u-boot-tree. Anyways, it worked on the board. Easy. Type “nand” in the u-boot terminal for available commands.

    SPL image (boot.bin)

    Then the changes for the SPL image were made.

    Currently U-Boot runs twice to build both images. For the SPL run it sets CONFIG_SPL_BUILD, and the results are found in the spl/ folder. So, in general, if one would like to build U-Boot with an SPL supporting NAND flash for some other board, he/she should check out common/spl/spl_nand.c for the required functions; they are:

    nand_spl_load_image()
    nand_init() /*no need if drivers/mtd/nand.c is included in the SPL build*/
    nand_deselect() /*usually an empty function*/

    And drivers/mtd/nand/ - for driver examples for SPL – there are not too many of them for some reason.

    For nand_init() I included drivers/mtd/nand.c – it calls board_nand_init() which is found in the driver for the full image – zynq_nand.c.
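
    For illustration, the shape of those hooks is roughly as follows. This is a simplified sketch, not the actual Elphel code: zynq_nand_read_page() is a hypothetical helper standing in for the driver’s page read path, ECC and error handling are omitted, and the exact prototypes differ between U-Boot versions.

    #include <stdint.h>

    /* nand_init() comes from drivers/mtd/nand.c when it is included in the
     * SPL build -- it calls the driver's board_nand_init(), so only the two
     * functions below need to be provided. */

    /* hypothetical page-read helper built on top of the zynq_nand driver */
    int zynq_nand_read_page(uint32_t offs, uint8_t *dst, unsigned int len);

    void nand_deselect(void)
    {
        /* usually an empty function: the controller deselects the chip */
    }

    int nand_spl_load_image(uint32_t offs, unsigned int size, void *dst)
    {
        const unsigned int page_size = 2048;   /* assumed page size */
        uint8_t *out = dst;

        while (size) {
            unsigned int chunk = size < page_size ? size : page_size;

            if (zynq_nand_read_page(offs, out, chunk))
                return -1;

            offs += chunk;
            out  += chunk;
            size -= chunk;
        }
        return 0;
    }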

    Defines in include/configs/elphel393.h:

    #define CONFIG_SPL_NAND_ELPHEL393
    #define CONFIG_SYS_NAND_U_BOOT_OFFS 0x100000 /*look-up in dts!*/
    #define CONFIG_SPL_NAND_SUPPORT
    #define CONFIG_SPL_NAND_DRIVERS
    #define CONFIG_SPL_NAND_INIT
    #define CONFIG_SPL_NAND_BASE
    #define CONFIG_SPL_NAND_ECC
    #define CONFIG_SPL_NAND_BBT
    #define CONFIG_SPL_NAND_IDS
    /* Load U-Boot to this address */
    #define CONFIG_SYS_NAND_U_BOOT_DST CONFIG_SYS_TEXT_BASE
    #define CONFIG_SYS_NAND_U_BOOT_START CONFIG_SYS_NAND_U_BOOT_DST

    CONFIG_SYS_NAND_U_BOOT_OFFS 0x100000 is the offset in the flash where u-boot-dtb.img is written – this is done in the OS. The flash partitions are defined in the device tree for the kernel.

    Again a few small modifications (KConfigs and makefiles) to include everything in the build – see ezynq/u-boot-tree.

    NOTES:

    • Before, boot.bin was about 60K (out of 192K available). After everything was included, the size is 110K. Well, it fits, so optimizing the driver to include only what is needed – init and read – can be done some time in the future.
    • drivers/mtd/nand/nand_base.c – kzalloc would hang the board – had to change it in the SPL build.
    • drivers/mtd/nand/zynq_nand.c – added a timeout to some flash functions (NAND_CMD_RESET) – this addresses the case when the board has the flash width configured (through MIO pins) but doesn’t carry flash, or the flash cannot be detected for some reason. Without the timeout, such boards would hang. A minimal sketch of such a timeout loop is shown below.
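
    For reference, the kind of timeout wrapper meant above looks roughly like this; zynq_nand_is_busy() is a stand-in for whatever status poll the real driver performs, the timeout value is a placeholder, and get_timer() is U-Boot’s millisecond timer.

    #include <common.h>                          /* U-Boot: get_timer() */

    /* Illustrative timeout around the reset/status polling; not the exact
     * patch that went into zynq_nand.c. */
    #define NAND_RESET_TIMEOUT_MS 100            /* placeholder value */

    int zynq_nand_is_busy(void);                 /* hypothetical status poll */

    static int zynq_nand_wait_ready(void)
    {
        unsigned long start = get_timer(0);

        while (zynq_nand_is_busy()) {
            if (get_timer(start) > NAND_RESET_TIMEOUT_MS)
                return -1;                       /* no flash detected: give up instead of hanging */
        }
        return 0;
    }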

    Other Notes

    • With U-Boot moving to Kbuild, nobody knows what will happen to the CONFIG_EXTRA_ENV_SETTINGS multi-line define.
    • Current U-Boot uses a stripped down device tree – added to Ezynq.
    • The ideal scenario is to boot from SPL straight to OS – the falcon mode (CONFIG_SPL_OS_BOOT). Consider in future.
    • Tertiary Program Loader (TPL) – no plans.

     

    by Oleg Dzhimiev at March 18, 2016 11:40 PM

    Free FPGA: Reimplement the primitives models

    We added the AHCI SATA controller Verilog code to the rest of the camera FPGA project; together they now use 84% of the Zynq slices. Building the FPGA bitstream file requires proprietary tools, but all the simulation can be done with just Free Software – Icarus Verilog and GTKWave. Unfortunately it is not possible to distribute a complete set of the files needed – our code instantiates a few FPGA primitives (hard-wired modules of the FPGA) that have a proprietary license.

    Please help us to free the FPGA devices for developers by re-implementing the primitives as Verilog modules under GNU GPLv3+ license – in that case we’ll be able to distribute a complete self-sufficient project. The models do not need to provide accurate timing – in many cases (like in ours) just the functional simulation is quite sufficient (combined with the vendor static timing analysis). Many modules are documented in Xilinx user guides, and you may run both the original and replacement models through the simulation tests in parallel, making sure the outputs produce the same signals. It is possible that such designs can be used as student projects when studying Verilog.

    Models we are looking for

    The camera project includes more than 200 Verilog files, and it depends on just 29 primitives from the Xilinx simulation library (total number of the files there is 214):

    • BUFG.v
    • BUFH.v
    • BUFIO.v
    • BUFMR.v
    • BUFR.v
    • DCIRESET.v
    • GLBL.v
    • IBUF.v
    • IBUFDS_GTE2.v
    • IBUFDS.v
    • IDELAYCTRL.v
    • IDELAYE2_FINEDELAY.v
    • IDELAYE2.v
    • IOBUF_DCIEN.v
    • IOBUF.v
    • IOBUFDS_DCIEN.v
    • ISERDESE1.v *
    • MMCME2_ADV.v
    • OBUF.v
    • OBUFT.v
    • OBUFTDS.v
    • ODDR.v
    • ODELAYE2_FINEDELAY.v
    • OSERDESE1.v *
    • PLLE2_ADV.v
    • PS7.v
    • PULLUP.v
    • RAMB18E1.v
    • RAMB36E1.v

    This is just a raw list of the unisims modules referenced in the design; it includes PS7.v – a placeholder model of the ARM processing system (modules for AXI functionality simulation are already included in the project). The implementation is incomplete, but sufficient for the camera simulation, and can be used for other Zynq-based projects. Some primitives are very simple (like DCIRESET), some are much more complex. Two modules (ISERDESE1.v and OSERDESE1.v) in the project are the open-source replacements for the encrypted models of the enhanced hardware in Zynq (ISERDESE2.v and OSERDESE2.v) – we used a simple ifdef wrapper that selects the reduced (but sufficient for us) functionality of the earlier open source model for simulation and the current “black box” for synthesis.

    The file list above includes all the files we need for our current project; as soon as Free Software replacements are available, we will be able to distribute the self-sufficient project. Other FPGA development projects may need other primitives, so ideally we would like to see all of the primitives have free models for simulation.

    Why is it important

    Elphel is developing high-performance products based on FPGA designs that we believe are created for Freedom. We share all the code with our users under the GNU General Public License version 3 (or later), but the project depends on proprietary tools distributed by vendors who have a monopoly on the tools for their silicon.

    There are very interesting projects (like icoBOARD) that use smaller devices with a completely Free toolchain (Yosys), but the work of those developers is seriously complicated by the non-cooperation of the FPGA vendors. I hope that in the future there will be laws that limit the monopoly of the device manufacturers and require complete documentation for the products they release to the public. There are advanced patent laws that can protect the FPGA manufacturers and their inventions from competitors; there is no real need for them to fight against their users by hiding the documentation for the products.

    Otherwise this secrecy and "Security through Obscurity" will eventually (and rather soon) lead to a very insecure world where all those self-driving cars and "smart homes" will obey not us, but the "bad guys", as today's software malware reaches ever deeper hardware levels. It is very naive to believe that they (the manufacturers) are the ultimate masters and have complete control of "their" devices of ever-growing complexity. Unfortunately they do not realize this and are still living in 20th-century dreams, treating their users as kids who can only play with "Lego blocks" and believe in powerful Wizards who pretend to know everything.

    We use a proprietary toolchain for implementation, but exclusively Free tools for simulation

    Our projects require devices that are more advanced than those that can already be programmed with independently designed Free Software tools, so we have to use the proprietary ones. Freeing the simulation seems to be achievable, and we took a step in this direction by making simulation of the whole project possible with Free Software alone. Working with the HDL code and simulating it takes up most of the FPGA design cycle – in our experience 2/3 to 3/4 – and only the remaining part involves running the toolchain and testing/troubleshooting the hardware. The last step (hardware troubleshooting) can also be done without any proprietary software – we never used any in this project, which utilizes most of the Xilinx Zynq FPGA resources. The combination of Verilog modules and extensible Python programs that run on the target devices proved to be a working and convenient solution that keeps the developer in full control of the process. These programs read the Verilog header files with parameter definitions to synchronize register and bit field addresses between the hardware and the software that uses them.

    Important role of the device primitives models

    Modern FPGAs include many hard-wired embedded modules that supplement the uniform “sea of gates” – the addition of such modules significantly increases the performance of the device while preserving its flexibility. The modules include memory blocks, DSP slices, PLL circuits, serial-to-parallel and parallel-to-serial converters, programmable delays, high-speed serial transceivers, processor cores and more. Some modules can be automatically extracted by the synthesis software from the source HDL code, but in many cases we have to directly instantiate such primitives in the code, and this code then directly references the device primitives.

    The fewer primitives are directly instantiated in the project, the more portable (not tied to a particular FPGA architecture) it is. But in some cases the synthesis tools (which are proprietary, so not fixable by the users) incorrectly extract the primitives; in others, the module functionality is so specific to the device that the synthesis tool will not even try to recognize it in behavioral Verilog code.

    Even open source proprietary modules are inconvenient

    In earlier days Xilinx provided all of their primitives models as open source code (but under a non-free license), so it was possible to use Free Software tools to simulate the design. But even then it was not very convenient, for both our users and ourselves.

    It is not possible to distribute the proprietary code with the projects, so our users had to register with the FPGA manufacturer, download the multi-gigabyte software distribution and agree to the specific license terms before they were able to extract those primitives models missing from our project repository. The software license includes a requirement to install mandatory spyware, granting permission to transfer your files to the manufacturer – this may be unacceptable for many of our users.

    It is also inconvenient for ourselves. The primitives models provided by the manufacturer sometimes have problems – they either do not match the actual hardware or lack full compatibility with the simulator programs we use. In such cases we were providing patches that can be applied to the code provided by the manufacturer. If Xilinx kept them in a public Git repository, we could base our patches on particular tags or commits, but that is not the case, and the manufacturer/software provider reserves the right to change the distributed files at any time without notice. So we have to update the patches to keep the simulation working even when we did not change a single line of our own code.

    Encrippled modules are unacceptable

    When I started working on the FPGA design for Zynq I was surprised to notice that Xilinx had abandoned the practice of providing the source code of the simulation models for the device primitives. The new versions of the older primitives (such as ISERDESE2.v and OSERDESE2.v instead of the previous ISERDESE1.v and OSERDESE1.v) now come in encrippled (crippled by encryption) form, while they were open-sourced before. And it is likely this alarming tendency will continue – many proprietary vendors hide the source code just because they are not so proud of its quality and cannot resist the temptation to encrypt it instead of removing the obsolete statements and updating the code to modern standards.

    Such code is not just inconvenient, it is completely unacceptable for our design process. The first obvious reason is that it is not compatible with the most important development tool – a simulator. Xilinx provides decryption keys to trusted vendors of proprietary simulators and I do not have plans to abandon my choice of the tool just because the FPGA manufacturer prefers a different one.

    Personally I would not use any “black boxes” even if Icarus supported them – the nature of FPGA design is already complex enough without spending any extra time of your life guessing why a “black box” behaves differently than expected. And all the “black boxes” and “wizards” are always limited and never match the real hardware 100%. That is normal when they cover most of the cases and you have the ability to peek inside when something goes wrong, so you can isolate the bug and (if it is actually a bug of the model – not your code) report it precisely and find the solution with the manufacturer’s support. Reporting problems in the form “my design does not work with your black box” is rather useless even when you provide all your code – it will be a difficult task for the support team to troubleshoot a mixture of your and their code, something you could do better yourself.

    So far we have used two different solutions to handle encrypted modules. In one case, where the older non-crippled model was available, we just used the older version for the new hardware; the other case required a complete re-implementation of the GTX serial transceiver model. The current code has many limitations even with its 3000+ lines, but it proved to be sufficient for the SATA controller development.

    Additional permission under GNU GPL version 3 section 7

    GNU General Public License Version 3 offers a tool to apply the license in what is still a "grey area" – FPGA code. When we were using the earlier GPLv2 for the FPGA projects we realized that it was more a statement of intentions than a binding license – the FPGA bitstream as well as the simulation inevitably combined free and proprietary components. It was OK for us as the copyright holders, but would make it impossible for others to distribute their derivative projects in a GPL-compliant way. Version 3 has a Section 7 that can be used to give permission for distribution of derivative projects that depend on non-free components, which are still needed to:

    1. generate a bitstream (equivalent to a software “binary”) file and
    2. simulate the design with Free Software tools

    The GPL requirement to provide other components under the same license terms when distributing the combined work remains in force – it is not possible to mix this code with any other non-free code. The following is our wording of the additional permission as included in every Verilog file header in Elphel FPGA projects.

    Additional permission under GNU GPL version 3 section 7:
    If you modify this Program, or any covered work, by linking or combining it
    with independent modules provided by the FPGA vendor only (this permission
    does not extend to any 3-rd party modules, "soft cores" or macros) under
    different license terms solely for the purpose of generating binary "bitstream"
    files and/or simulating the code, the copyright holders of this Program give
    you the right to distribute the covered work without those independent modules
    as long as the source code for them is available from the FPGA vendor free of
    charge, and there is no dependence on any encrypted modules for simulating of
    the combined code. This permission applies to you if the distributed code
    contains all the components and scripts required to completely simulate it
    with at least one of the Free Software programs.

    Available documentation for Xilinx FPGA primitives

    Xilinx has User Guides files available for download on their web site, some of the following links include release version and may change in the future. These files provide valuable information needed to re-implement the simulation models.

    • UG953 Vivado Design Suite 7 Series FPGA and Zynq-7000 All Programmable SoC Libraries Guide lists all the primitives, their I/O ports and attributes
    • UG474 7 Series FPGAs Configurable Logic Block has description of the CLB primitives
    • UG473 7 Series FPGAs Memory Resources has description for Block RAM modules, ports, attributes and operation of these modules
    • UG472 7 Series FPGAs Clocking Resources provides information for the clock buffering (BUF*) primitives and clock management tiles – MMCM and PLL primitives of the library
    • UG471 7 Series FPGAs SelectIO Resources covers advanced I/O primitives, including DCI, programmable I/O delays elements and serializers/deserializers, I/O FIFO elements
    • UG476 7 Series FPGAs GTX/GTH Transceivers is dedicated to the high speed serial transceivers. Simulation models for these modules are partially re-implemented for use in AHCI SATA Controller.

    by andrey at March 18, 2016 10:42 PM

    March 15, 2016

    Elphel

    AHCI platform driver

    AHCI PLATFORM DRIVER

    In kernels prior to 2.6.x, AHCI was only supported through PCI and hence required custom patches to support a platform AHCI implementation. All modern kernels have SATA support as part of the AHCI framework, which significantly simplifies driver development. Platform drivers follow the standard driver model convention, described in Documentation/driver-model/platform.txt in the kernel source tree, and provide methods called during discovery or enumeration in their platform_driver structure. This structure is used to register the platform driver and is passed to the module_platform_driver() helper macro, which replaces the module_init() and module_exit() functions. We redefined the probe() and remove() methods of platform_driver in our driver to initialize/deinitialize the resources defined in the device tree and to allocate/deallocate memory for the driver-specific structure. We also opted for the resource-managed function devm_kzalloc(), as it seems to be the preferred way of allocating resources in modern drivers. Memory allocated with a resource-managed function is associated with the device and will be freed automatically after the driver is unloaded.
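
    Purely as an illustration of these conventions (and not the actual x393_sata driver code; the names and the compatible string below are placeholders), a stripped-down platform driver skeleton looks like this:

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/of.h>
    #include <linux/slab.h>

    struct example_ahci_priv {
        void __iomem *mmio;                 /* controller registers (mapping omitted) */
    };

    static int example_ahci_probe(struct platform_device *pdev)
    {
        struct example_ahci_priv *priv;

        /* devm_ allocation: freed automatically when the driver is unbound */
        priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
            return -ENOMEM;

        platform_set_drvdata(pdev, priv);
        /* ... map resources from the device tree, register the AHCI host ... */
        return 0;
    }

    static int example_ahci_remove(struct platform_device *pdev)
    {
        /* devm_-managed resources are released by the driver core */
        return 0;
    }

    static const struct of_device_id example_ahci_of_match[] = {
        { .compatible = "example,ahci-sata" },  /* placeholder compatible string */
        { /* sentinel */ },
    };
    MODULE_DEVICE_TABLE(of, example_ahci_of_match);

    static struct platform_driver example_ahci_driver = {
        .probe  = example_ahci_probe,
        .remove = example_ahci_remove,
        .driver = {
            .name           = "example-ahci",
            .of_match_table = example_ahci_of_match,
        },
    };
    module_platform_driver(example_ahci_driver);

    MODULE_LICENSE("GPL");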

    HARDWARE LIMITATIONS

    As Andrey has already pointed out in his post, the current implementation of the AHCI controller has several limitations, and our platform driver is affected by two of them.
    First, there is a deviation from the AHCI specification which should be considered during platform driver implementation. The specification defines that the host bus adapter uses system memory for the Command List Structure, Received FIS Structure and Command Tables. The common approach in platform drivers is to allocate a block of system memory with a single dmam_alloc_coherent() call, set pointers to the different structures inside this block, and store these pointers in the port-specific structure ahci_port_priv. The first two of these structures in x393_sata are stored in FPGA RAM blocks and mapped to register memory, as it was easier to make them this way. Thus we need to allocate a block of system memory for the Command Tables only and set the other pointers to predefined addresses.
    Second, and the most significant one from the driver’s point of view, is that only a single command slot is implemented. Low level drivers assume that all 32 slots in the Command List Structure are implemented and explicitly use the last slot for internal commands in the ata_exec_internal_sg() function, as shown in the following code snippet:
    struct ata_queued_cmd *qc;
    unsigned int tag, preempted_tag;
     
    if (ap->ops->error_handler)
        tag = ATA_TAG_INTERNAL;
    else
        tag = 0;
    qc = __ata_qc_from_tag(ap, tag);

    ATA_TAG_INTERNAL is defined in libata.h and reserved for internal commands. We wanted to keep all the code of our driver in our own sources and make as few changes to existing Linux drivers as possible, to simplify further development and upgrades to newer kernels. So we decided that substituting the command tag in our own command preparation code would be the easiest way of fixing this issue.

    DRIVER STRUCTURES

    Proper platform driver initialization requires that several structures be prepared and passed to platform functions during driver probing. One of them is scsi_host_template, and it serves as a direct interface between middle level drivers and low level drivers. Most AHCI drivers use the default AHCI_SHT macro to fill the structure with predefined values. This structure contains a field called .can_queue which is of particular interest for us. The .can_queue field sets the maximum number of simultaneous commands the host bus adapter can accept, and this is the way to tell middle level drivers that our controller has only one command slot. The scsi_host_template structure was redefined in our driver as follows:
    static struct scsi_host_template ahci_platform_sht = {
        AHCI_SHT(DRV_NAME),
        .can_queue = 1,
        .sg_tablesize = AHCI_MAX_SG,
        .dma_boundary = AHCI_DMA_BOUNDARY,
        .shost_attrs = ahci_shost_attrs,
        .sdev_attrs = ahci_sdev_attrs,
    };

    Unfortunately, the ATA layer driver does not take into consideration the value we set in this template and uses a hard-coded tag value for its internal commands, as I pointed out earlier, so we had to fix this in the command preparation handler.
    ata_port_operations is another important driver structure as it controls how the low level driver interfaces with upper layers. This structure is defined as follows:
    static struct ata_port_operations ahci_elphel_ops = {
        .inherits = &ahci_ops,
        .port_start = elphel_port_start,
        .qc_prep = elphel_qc_prep,
    };

    The port start and command preparation handlers were redefined to add some implementation-specific code. .port_start is used to allocate memory for the Command Table and set pointers to the Command List Structure and Received FIS Structure. We decided to use streaming DMA mapping instead of the coherent DMA mapping used in the generic AHCI driver, as explained in Andrey’s article. .qc_prep is used to change the tag of the current command and organize proper access to the DMA-mapped buffer.
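
    To illustrate the tag substitution idea, here is a simplified sketch of the concept, not the verbatim driver code (see the GitHub link in the LINKS section for that):

    #include <linux/libata.h>

    /* With only one hardware command slot, remap libata's internal-command
     * tag (which assumes the last slot exists) onto slot 0 before building
     * the command. */
    static void elphel_qc_prep(struct ata_queued_cmd *qc)
    {
        if (qc->tag == ATA_TAG_INTERNAL)
            qc->tag = 0;

        /* ... then fill in the command FIS and PRD table as usual ... */
    }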

    PERFORMANCE CONSIDERATIONS

    We used debug code in the driver along with profiling code in the controller to estimate overall performance, and found out that the upper driver layers introduce significant delays into the command execution sequence. The delay between the last DMA transaction in a sequence of transactions and the next command could be as high as 2 ms. There are various sources of overhead which could lead to delays, for instance, file system operations and context switches in the operating system. We will try to use read/write operations on a raw device to improve performance.

    LINKS

    AHCI/SATA stack under GNU GPL
    GitHub: AHCI driver source code

    by Mikhail Karpenko at March 15, 2016 02:16 AM

    March 14, 2016

    Harald Welte

    Open Source mobile communications, security research and contributions

    While preparing my presentation for the Troopers 2016 TelcoSecDay I was thinking once again about the importance of having FOSS implementations of cellular protocol stacks, interfaces and network elements in order to enable security researches (aka Hackers) to work on improving security in mobile communications.

    From the very beginning, this was the motivation of creating OpenBSC and OsmocomBB: To enable more research in this area, to make it at least in some ways easier to work in this field. To close a little bit of the massive gap on how easy it is to do applied security research (aka hacking) in the TCP/IP/Internet world vs. the cellular world.

    We have definitely succeeded in that. Many people have successfully used the various Osmocom projects in order to do cellular security research, and I'm very happy about that.

    However, there is a downside to that, which I'm less happy about. In those past eight years, we have not managed to attract a significant amount of contributions to the Osmocom projects from those people that benefit most from it: neither from those very security researchers that use it in the first place, nor from the Telecom industry as a whole.

    I can understand that the large telecom equipment suppliers may think that FOSS implementations are something of a competition and thus might not be particularly enthusiastic about contributing. However, the story for the cellular operators and the IT security crowd is definitely quite different. They should have no good reason not to contribute.

    So as a result of that, we still have a relatively small amount of people contributing to Osmocom projects, which is a pity. They can currently be divided into two groups:

    • the enthusiasts: People contributing because they are enthusiastic about cellular protocols and technologies.
    • the commercial users, who operate 2G/2.5G networks based on the Osmocom protocol stack and who either contribute directly or fund development work at sysmocom. They typically operate small/private networks, so if they want data, they simply use Wifi. There's thus not a big interest or need in 3G or 4G technologies.

    On the other hand, the security folks would love to have 3G and 4G implementations that they could use to talk to either mobile devices over a radio interface, or towards the wired infrastructure components in the radio access and core networks. But we don't see significant contributions from that sphere, and I wonder why that is.

    At least that part of the IT security industry that I know typically works with very comfortable budgets and profit rates, and investing in better infrastructure/tools is not charity anyway, but an actual investment into working more efficiently and/or extending the possible scope of related pen-testing or audits.

    So it seems we might want to think what we could do in order to motivate such interested potential users of FOSS 3G/4G to contribute to it by either writing code or funding associated developments...

    If you have any thoughts on that, feel free to share them with me by e-mail to laforge@gnumonks.org.

    by Harald Welte at March 14, 2016 11:00 PM

    TelcoSecDay 2016: Open Source Network Elements for Security Analysis of Mobile Networks

    Today I had the pleasure of presenting about Open Source Network Elements for Security Analysis of Mobile Networks at the Troopers 2016 TelcoSecDay.

    The main topics addressed by this presentation are:

    • Importance of Free and Open Source Software implementations of cellular network protocol stacks / interfaces / network elements for applied telecom security research
    • The progress we've made at Osmocom over the last eight years.
    • An overview of our current efforts to implement a 3G network similar to the existing 2G/2.5G/2.75G implementations.

    There are no audio or video recordings of this session.

    Slides are available at http://git.gnumonks.org/index.html/laforge-slides/plain/2016/telcosecday/foss-gsm.html

    by Harald Welte at March 14, 2016 11:00 PM

    March 13, 2016

    Bunnie Studios

    Preparing for Production of The Essential Guide To Electronics in Shenzhen

    The crowd funding campaign for The Essential Guide to Electronics in Shenzhen is about to wrap up in a couple of days.

    I’ve already started the process of preparing the printing factory for production. Last week, I made another visit to the facility to discuss production forecasts and lead time, and to review the latest iteration of the book’s prototype. It’s getting pretty close. I’m now using a heavy, laminated cardstock for the tabbed section dividers to improve their durability. The improved tabs push up the cost of the book and, more significantly, push the shipping weight of the book over 16 oz, which means I’m now paying a higher rate for postage. However, this is mostly offset by the higher print volume, so I can mitigate the unexpected extra costs.

    The printing factory has a lot of mesmerizing machines running on the floor, like this automatic cover binder for perfect-bound books:

    And this high speed two-color printing press:

    This is probably the very press that the book will be printed on. The paper moves so fast that it’s just a blur as an animated gif. I estimate it does about 150 pages per minute, and each page is about a meter across, which gives it an effective throughput of over a thousand book-sized pages per minute. Even for a run of a couple thousand books, this machine would only print for about 15 minutes before it has to stop for a printing plate swap, an operation which takes a few minutes to complete. This explains why books don’t get really cheap until the volume reaches tens of thousands of copies.

    Above is the holepunch used for building prototypes of ring-bound books. The production punch is done using a semi-automated high-volume die-cutter, but for the test prints, this is the machine used to punch out the holes.


    The ring binding itself is done by a fairly simple machine. The video above shows the process used to adjust the machine’s height for a single shot on the prototype book. In a production scenario, there would be a few workers on the table to the left of the binding machine aligning pages, adding the covers, and inserting the ring stock. Contrast this to the fully automated perfect binding machine shown at the top of this post — ring binding is a much more expensive binding style in this factory, since they haven’t automated the process (yet).

    I also got a chance to see the machine that gilds and debosses the book cover. It’s a bit of a different process than the edge-gilding I described in the previous post about designing the cover.

    Here, an aluminum plate is first made with the deboss pattern. It looks pretty neat — I’ve half a mind to ask the laoban if he’d save the used plates for me to keep as a souvenir, although the last thing I need in my tiny flat in Singapore is more junk.

    The plate is then glued into a huge press. This versatile machine can do debossing, die cutting, and gilding on sheets of paper as large as A0. For the gilding operation, the mounting face for the aluminum plate is heated to around 130 degrees Celsius.

    I think it’s kind of cute how they put good luck seals all over the machines. The characters say “kai gong da ji”, which literally translated means “start operation, big luck”. I don’t know what the underlying reason is — maybe it’s to wish good luck on the machine, the factory, or the operator; or maybe to fix its feng shui, or some kind of voodoo to keep the darned thing from breaking down again. I’ll have to remember to ask about the sticker next time I visit.

    Once at temperature, the gilding foil is drawn over the plate, and the alignment of the plate is determined by doing a test shot onto a transparent plastic sheet. The blank cover is then slid under the sheet, taped in place, and the clear sheet removed.

    The actual pressing step is very fast — so fast I didn’t have a chance to turn my camera into video mode, so I only have a series of three photos to show the before, pressing, and after states.

    And here’s a photo of me with the factory laoban (boss), showing off the latest prototype. I’ve often said that if you can’t meet the laoban, the factory’s too big for you. Having a direct relationship with the laoban has been helpful for this project; he’s very patiently addressed all my strange customization requests, and as a side bonus he seems to know all the good restaurants in the area so the after-work meals are usually pretty delicious.

    I’m looking forward to getting production started on the book, and getting all the pledge rewards delivered on-time. Now’s the last chance to back the crowd funding campaign and get the book at a discounted price. I will order some extra copies of the book, but it’s been hard to estimate demand, so there’s a risk the book could sell out soon after the campaign concludes.

    by bunnie at March 13, 2016 02:11 PM

    March 12, 2016

    Elphel

    AHCI/SATA stack under GNU GPL

    The implementation includes an AHCI SATA host adapter in Verilog under GNU GPLv3+ and a software driver for GNU/Linux running on Xilinx Zynq. The complete project is simulated with Icarus Verilog; no encrypted modules are required.

    This concludes the last major FPGA development step in our race against the finished camera parts and boards already arriving at the Elphel facility, before the NC393 can be shipped to our customers.

    Fig. 1. AHCI Host Adapter block diagram


    Why did we need SATA?

    Elphel cameras started as network cameras – devices attached to and controlled over Ethernet. The previous generations used a 100Mbps connection (limited by the SoC hardware), and the NC393 uses GigE. But this bandwidth is still not sufficient, as many camera applications require high image quality (compared to “raw”) without the compression artifacts that are always present (even if not noticeable by a human viewer) with video codecs. Recording video/images to some storage media is definitely an option and we used it in the older cameras too, but the SoC IDE controller limited the recording speed to just 16MB/s. That was about twice the 100Mb/s network, but it was still a bottleneck for the system in many cases. The NC393 can generate 12 times the pixel rate of the NC353 (4 simultaneous channels instead of a single one, each running 3 times faster), so we need a 200MB/s recording speed to keep the same compression quality at the increased maximal frame rate; the higher recording rates that modern SSDs are capable of are very desirable.

    Fig.2. SATA routing: a) Camera records data to the internal SSD; b) Host computer connects directly to the internal SSD; c) Camera records to the external mass storage device

    The most universal ways to attach a mass storage device to the camera would be USB, SATA and PCIe. USB-2 is too slow, and USB-3 is not available in the Xilinx Zynq that we use. So what remains are SATA and PCIe. Both interfaces are possible to implement in Zynq, but PCIe (being faster, as it uses multiple lanes) is good for internal storage, while SATA (in the form of eSATA) can be used to connect external storage devices too. We may consider adding PCIe capability to boost recording speed, but for the initial implementation SATA seems more universal, especially when using a trick we tested in the Eyesis series of cameras for fast unloading of the recorded data.

    Routing SATA in the camera

    It is a solution similar to USB On-The-Go (a similar term for SATA is used for unrelated devices), where the same connector is used to interface a smartphone to a host PC (the PC is the host, the smartphone the device) and to connect a keyboard or other device when the phone becomes a host. In contrast to USB cables, eSATA cables always had identical connectors on both ends, so nothing prevented physically linking two computers or two external drives together. As eSATA does not carry power it is safe to do, but nothing will work – two computers will not talk to each other and the storage devices will not be able to copy data between themselves. One of the reasons is that the two signal pairs in a SATA cable are uni-directional – pair A is an output for the host and an input for the device, pair B – the opposite.

    The camera uses a Vitesse (now Microsemi) VSC3304 crosspoint switch (Eyesis uses the larger VSC3312) that has a very useful feature – reversible I/O ports, so the same physical pins can be configured as inputs or outputs, making it possible to use a single eSATA connector in both host and device mode. Additionally, the VSC3304 allows changing the output signal level (eSATA requires a higher swing than internal SATA) and performing analog signal correction on both inputs and outputs, which helps maintain signal integrity between the attached SATA devices.

    Aren’t SATA implementations for Xilinx Zynq already available?

    Yes and no. When starting the NC393 development I contacted Ashwin Mendon, who already had SATA-2 working on Xilinx Virtex. The code is available on OpenCores under the GNU GPL license, and there is an article published by IEEE. The article turned out to be very useful for our work, but the code itself had to be mostly re-written – it was for different hardware, and we were not able to simulate the core as it depends on Xilinx proprietary encrypted primitives – a feature not compatible with the free software simulators we use.

    Other implementations we could find (including a complete commercial solution for Xilinx Zynq) have licenses not compatible with the GNU GPLv3+, and as the FPGA code is “compiled” to a single “binary” (bitstream file) it is not possible to mix free and proprietary code in the same design.

    Implementation

    The SATA host adapter is implemented for the Elphel NC393 camera; the 10393 system board documentation is on our wiki page. The Verilog code is hosted at GitHub, and the GNU/Linux driver ahci_elphel.c is there as well (it is the only hardware-specific driver file required). The repository contains a complete setup for simulation with Icarus Verilog and synthesis/implementation with Xilinx tools as a VDT (plugin for Eclipse IDE) project.

    Current limitations

    The current project was designed to be a minimal useful implementation with provisions to future enhancements. Here is the list of what is not yet done:

    • It is only SATA2 (3Gbps) while the hardware is SATA3 (6Gbps) capable. We will definitely work on SATA3 after we complete migration to the new camera platform. Most of the project modules are already designed for the higher data rate.
    • No scrambling of outgoing primitives, only recognition of incoming ones. Generation of CONTp is optional in the SATA standard, but we will definitely add it as it reduces EMI and we have already implemented multiple hardware measures in this direction. Most likely we will need it for the CE certification.
    • No FIS-based switching for port multipliers.
    • Single command slot, and no NCQ. This functionality is optional in AHCI, but it will be added – not much is missing in the current design.
    • No power management. We will look for the best way to handle it as some of the hardware control (like DevSleep) requires i2c communication with the interface board, not directly under FPGA control. Same with the crosspoint switch.

    There is also a deviation from the AHCI standard that I first considered temporary, but now think will stay this way. AHCI specifies that a Command List structure (an array of 32 8-DWORD command headers) and a 256-byte Received FIS structure are stored in system memory. On the other hand, these structures need non-paged memory, are rather small and require access from both the CPU and the hardware. In x393_sata these structures are mapped to the register memory (stored in the FPGA RAM blocks) – not to the regular system memory. When working with the AHCI driver we noticed that it is even simpler to do it that way. The command tables themselves, which involve more data passing from the software to the device (especially the PRDT – physical region descriptor tables generated from the scatter-gather lists of allocated data memory), are stored in system memory as required and are read into the hardware by the DMA engine of the controller.
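
    For reference, the structures mentioned above have the following layout in the AHCI specification (field names abbreviated; this sketch is for orientation only and is not taken from the x393_sata code):

        #include <stdint.h>

        /* Command List: an array of 32 of these 8-DWORD headers */
        struct ahci_cmd_hdr {
            uint32_t dw0;     /* CFL, W, P, PMP flags (bits 0..15), PRDTL (bits 16..31) */
            uint32_t prdbc;   /* PRD byte count, updated by the HBA */
            uint32_t ctba;    /* command table base address, 128-byte aligned */
            uint32_t ctbau;   /* upper 32 bits of the command table address */
            uint32_t rsvd[4];
        };

        /* Each command table ends with a PRDT, built in this design from
         * the scatter-gather list residing in system memory */
        struct ahci_prdt_entry {
            uint32_t dba;     /* data base address */
            uint32_t dbau;
            uint32_t rsvd;
            uint32_t dbc_i;   /* byte count - 1 (bits 0..21), interrupt on completion (bit 31) */
        };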

    As of today the code is not yet cleaned up from temporary debug additions. It will all be done in the next couple of weeks as we need to combine this code with the large camera-specific code – the SATA controller (~6% of the FPGA resources) was developed separately from the rest of the code (~80% of the resources) as that makes both simulation and synthesis iterations much faster.

    Extras

    This implementation includes some additional functionality controlled by Verilog `ifdef directives. Two full block RAM primitives are used for capturing data in the controller. One of these “datascopes” captures incoming data right after the 10b/8b decoder – it can store either 1024 samples of the incoming data (16 bits of data plus attributes) or a compact form where each 32-bit primitive is decoded and the result is a 5-bit primitive/error number. In that case 6*1024 primitives are recorded – 3 times longer than the longest FIS.

    Another 4KB memory block is used for profiling – the controller timestamps and records the first 5 DWORDs of each incoming and outgoing FIS; additionally it timestamps software writes to a specific location, allowing mixed software/hardware profiling.

    This project implements run-time access to the primitive attributes using the Xilinx DRP port of the GTX elements; the same interface is used to programmatically change the logical values of the configuration inputs, making it significantly simpler to guess how the partially documented attributes change the device functionality. We will definitely need it when upgrading to SATA3.

    Code description

    Top connections

    The controller uses 3 differential I/O pad pairs of the device – one input pair (RX on Fig. 1) and one output pair (TX) make up the SATA port, and an additional dedicated input pair (CLK) provides the 150MHz clock that synchronizes most of the controller and the transmit channel of the Zynq GTX module. On the 10393 board an SI53338 spread-spectrum capable programmable clock drives this input.

    Xilinx conventions say that the top level module should instantiate the SoC Processing System PS7 (I would rather consider connections to the PS7 as I/O ports), so the top module does exactly that and connects the AXI ports of the actual design top module to the MAXIGP1 and SAXIHP3 ports of the PS7; IRQF2P[0] provides the interrupt signal to the CPU. MAXIGP1 is one of the two 32-bit AXI ports where the CPU is the master – it is used for PIO access to the controller register memory (and to read out debug information). SAXIHP3 is one of the 4 “high performance” 64-bit wide paths; this port is used by the controller DMA engine to transfer command tables and data to/from the device. The port numbers are selected to match ones unused in the camera-specific code; other designs may have different assignments.

    Clocks and clock domains

    The current SATA2 implementation uses 4 different clock domains; some may be shared with other unrelated modules or have the same source.

    1. aclk is used in the MAXIGP1 channel and in the part of the MAXI REGISTERS module synchronizing the AXI-facing port of the dual-port block RAM that implements the controller registers. 150 MHz (the maximal permitted frequency) is used; it is generated from one of the PS7 FPGA clocks
    2. hclk is used in the AXI HP3 channel, DMA Control and the synchronizing parts of the H2D CCD FIFO (host-to-device cross clock domain FIFO), D2H CCD FIFO and AFI ABORT modules. 150 MHz (the maximal permitted frequency) is used, same as aclk
    3. mclk is used throughout most of the other modules of the controller except parts of the GTX, COMMA, 10b8 and the input parts of the ELASTIC. For the current SATA2 implementation it is 75MHz; this clock is derived from the external clock input and is not synchronous with the first two
    4. xclk – a source-synchronous clock extracted from the incoming SATA data. It drives the COMMA and 10b8 modules; ELASTIC allows data to cross the clock boundary by adding/removing ALIGNp primitives

    ahci_sata_layers

    The two lower layers of the stack (phy and link), which are independent of the controller system interface (AHCI), are instantiated in the ahci_sata_layers.v module together with the 2 FIFO buffers for D2H (incoming) and H2D (outgoing) data.

    SATA PHY

    The SATA PHY layer contains the OOB (Out Of Band) state machine responsible for handling the COMRESET, COMINIT and COMWAKE signals; the rest is just a wrapper for the functionality of the Xilinx GTX transceiver. This device includes both high-speed elements and some blocks that can be synthesized using the FPGA fabric. Xilinx does not provide the source code for the GTX simulation module and we were not able to match the hardware operation to the documentation, so in the current design we use only those parts of the GTXE2_CHANNEL primitive that can not be replaced by the fabric. The other modules are implemented as regular Verilog code included in the x393_sata project. There is a gtx_wrap module in the design that has the same input/output ports as the primitive, allowing selection of which features are handled by the primitive and which by the Verilog code, without changing the rest of the design.
    The GTX primitive itself can not be simulated with the tools we use, so the simulation module was replaced, and a Verilog `ifdef directive switches between the simulation model and the non-free primitive for synthesis. We used the same approach earlier with other Xilinx proprietary primitives.

    Link

    The Link module implements the SATA link state machine, scrambling/descrambling of the data, CRC calculation for transmitted data and CRC verification for the received data. SATA does not transmit and receive data simultaneously (only control primitives), so both the CRC and scrambler modules have a single instance each, providing dual functionality. This module required the most troubleshooting and modifications while testing the hardware with different SSDs – at some stages the controller worked with some of them, but not with others.

    ahci_top

    The other modules of the design are included in ahci_top. Of them the largest is the DMA engine, shown as a separate block on Fig. 1.

    DMA

    The DMA engine makes use of one of the Zynq 64-bit AXI HP ports. This channel includes FIFO buffers on the data and address subchannels (4 total), which makes interfacing rather simple. The hard task is resetting the channels after failed communication between the controller and the device – even reloading the bitstream and resetting the FPGA would not help (actually it makes things even worse). I searched the Xilinx support forum and found that similar questions were only discussed between users; there was no authoritative recommendation from Xilinx staff. I added an axi_hp_abort module that watches over the I/O transactions and keeps track of what was sent to the FIFO buffers, and is able to complete transactions and drain the buffers when requested.

    The DMA module reads the command table and saves the command data in the memory block to be later read by the FIS TRANSMIT module; it then reads the scatter-gather memory descriptors (PRDT), supporting pre-fetch if enabled, and reads/writes the data itself, combining the fragments.

    On the controller side, the data that goes out towards the device (H2D CCD FIFO) and the data coming from the device (D2H CCD FIFO) need to cross the clock boundary between hclk and mclk, and alignment issues have to be handled. AXI HP operates in 64-bit mode, data to/from the link layer is 32 bits wide, and AHCI allows alignment to an even number of bytes (16 bits). When reading from the device, the cross-clock domain FIFO module does it in a single step, combining 32-bit incoming DWORDs into 64-bit ones and using a barrel shifter (with 16-bit granularity) to align data to the 64-bit memory QWORDs – the AXI HP channel provides a per-byte write mask that makes it rather easy. The H2D data is converted in 2 steps: first it crosses the clock domain boundary, being simultaneously transformed to 32 bits with a 2-bit word mask that tells which of the two words in each DWORD are valid. An additional WORD STUFFER module operates in the mclk domain and consolidates the incoming sparse DWORDs into full outgoing DWORDs to be sent to the link layer.

    AHCI

    The rest of the ahci_top module is shown as the AHCI block. The AHCI standard specifies multiple registers and register groups that an HBA has. It is intended to be used for PCI devices, but the same registers can be used even when no PCI bus is physically present. The base address is programmed differently, but the relative register addressing is still the same.
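
    A few of the offsets involved, per the AHCI specification, are listed below for orientation only; in this design they are reached through the MAXIGP1 register window rather than a PCI BAR:

        /* generic host control registers */
        enum {
            HBA_CAP = 0x00,   /* host capabilities */
            HBA_GHC = 0x04,   /* global host control */
            HBA_IS  = 0x08,   /* interrupt status */
            HBA_PI  = 0x0c,   /* ports implemented */
            HBA_VS  = 0x10    /* AHCI version */
        };

        /* per-port registers, at offset 0x100 + 0x80 * port */
        enum {
            PxCLB = 0x00,     /* command list base address */
            PxFB  = 0x08,     /* received FIS base address */
            PxIS  = 0x10,     /* port interrupt status */
            PxCMD = 0x18,     /* port command and status */
            PxTFD = 0x20,     /* task file data */
            PxSIG = 0x24,     /* device signature */
            PxCI  = 0x38      /* command issue */
        };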

    MAXI REGISTERS

    The MAXI REGISTERS module provides the register functionality and allows data to cross the clock domain boundary. The register memory is made of a dual-port block RAM module; an additional block RAM (used as ROM) is pre-initialized to make each bit field of the register bank RW (read/write), RO (read only), RWC (read, write 1 to clear) or RW1 (read, write 1 to set) as specified by AHCI. Such initialization is handled by the Python program create_ahci_registers.py, which also generates the ahci_localparams.vh include file that provides symbolic names for addressing register fields in the Verilog code of other modules and in simulation test benches. The same file runs in the camera to allow access to the hardware registers by name.
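
    Conceptually (this is a plain C model of the idea, not the generated Verilog), the per-bit type information lets a register write be applied like this:

        #include <stdint.h>

        /* 'ro', 'rwc' and 'rw1' are per-bit masks taken from the
         * pre-initialized ROM; the remaining bits are plain read/write */
        static uint32_t apply_write(uint32_t old, uint32_t wdata,
                                    uint32_t ro, uint32_t rwc, uint32_t rw1)
        {
            uint32_t rw  = ~(ro | rwc | rw1);       /* ordinary RW bits      */
            uint32_t val = (old & ~rw) | (wdata & rw);

            val &= ~(rwc & wdata);                  /* RWC: write 1 to clear */
            val |=  (rw1 & wdata);                  /* RW1: write 1 to set   */
            return val;                             /* RO bits keep 'old'    */
        }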

    Each write access to the register space generates a write event that crosses the clock boundary and reaches the HBA logic; it is also used to start the AHCI FSM even if it is in the reset state.

    The second port of the register memory operates in the mclk domain and allows register reads and writes by the other AHCI submodules (FIS RECEIVE, which writes registers, as well as FIS TRANSMIT and CONTROL STATUS).

    The same module also provides access to debug registers and allows reading of the “datascope” acquired data.

    CONTROL STATUS

    The control/status module maintains “live” registers/bits that the controller needs to react to when they are changed by the software, and it reacts to various events in the different parts of the controller. The updated register values are written to the software-accessible register bank.

    This module generates interrupt request to the processor as specified in the AHCI standard. It uses one of the interrupt lines from the FPGA to the CPU (IRQF2P) available in Zynq.

    AHCI FSM

    The AHCI state machine implements the AHCI layer using a programmable sequencer. Each state traverses the following two stages: actions and conditions. The first stage triggers single-cycle pulses that are distributed to the appropriate modules (currently 52 total). Some actions require just one cycle, others wait for a “done” response from the destination. The conditions phase involves freezing the logical conditions (now 44 total) and then going through them in the order specified in the AHCI documentation. The state description for the machine is provided in an Assembler-like format inside the Python program ahci_fsm_sequence.py, which generates Verilog code for the action_decoder.v and condition_mux.v modules that are instantiated in ahci_fsm.v.
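
    The sequencer can be pictured with the following C model (purely conceptual, with made-up names; the real implementation is the generated Verilog described above): each state issues its action pulses, then the frozen conditions are scanned in order and the first match selects the next state:

        #include <stdbool.h>
        #include <stdint.h>

        struct transition {
            uint8_t condition;          /* index into the frozen condition vector */
            uint8_t next_state;
        };

        struct fsm_state {
            const uint8_t *actions;     /* action pulse numbers, 0-terminated */
            const struct transition *cond;
            int ncond;                  /* conditions checked in listed order */
        };

        static int fsm_step(const struct fsm_state *st, const bool *frozen,
                            void (*pulse_action)(uint8_t))
        {
            const uint8_t *a;
            int i;

            for (a = st->actions; *a; a++)
                pulse_action(*a);       /* single-cycle pulses to the submodules */

            for (i = 0; i < st->ncond; i++)
                if (frozen[st->cond[i].condition])
                    return st->cond[i].next_state;
            return -1;                  /* no condition matched: stay in this state */
        }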

    The output listing of the FSM generator is saved to ahci_fsm_sequence.lst. The debug output registers include the address of the last FSM transition, so this listing can be used to locate problems during hardware testing. It is possible to update the generated FSM sequence at run time using registers designated as vendor-specific in the controller I/O space.

    FIS RECEIVE

    The FIS RECEIVE module processes incoming FISes (DMA Setup FIS, PIO Setup FIS, D2H Register FIS, Set Device Bits FIS, unknown FIS), updates the required registers and saves them in the appropriate areas of the received FIS structure. For an incoming data FIS it consumes just the header DWORD and redirects the rest to the D2H CCD FIFO of the DMA module. This module also implements the word counters (PRD byte count and the decrementing transfer counter); these counters are shared with the transmit channel.

    FIS TRANSMIT

    The FIS TRANSMIT module recognizes the following commands received from the AHCI FSM: fetch_cmd, cfis_xmit, atapi_xmit and dx_xmit, following the prefetch condition bit. The first command (fetch_cmd) requests the DMA engine to read in the command table and optionally to prefetch the PRD memory descriptors. The command data is read from the DMA module memory after one of the cfis_xmit or atapi_xmit commands; it is then passed to the link layer to be sent to the device. When processing dx_xmit this module sends just the header DWORD and transfers control to the DMA engine, continuing to count the PRD byte count and decrementing the transfer counter.

    FPGA resources used

    According to the “report_utilization” Xilinx Vivado command, the current design uses:

    • 1358 (6.91%) slices
    • 9.5 (3.58%) Block RAM tiles
    • 7 (21.88%) BUFGCTRL
    • 2 (40%) PLLE2_ADV

    The resource usage will be reduced as there are debug features not yet disabled. One of the PLLE2_ADV instances uses a clock already available in the rest of the x393 code (150MHz for MAXIGP1 and SAXIHP3); the other PLL, which produces the 75MHz transmit-synchronous clock, can probably be eliminated too. Two of the block RAM tiles are capturing incoming primitives and profiling data; this functionality is not needed in the production version. More resources may be saved if we are able to use the hard-wired 10b/8b decoder, 8b/10b encoder, comma alignment and elastic buffer primitives of the Xilinx GTXE2_CHANNEL.

    Update: we eliminated the use of the PLLE2_ADV in the SATA controller (the one left just generates the AXI clock and is not needed with proper setting of the PS output clock) and reduced the number of slices (datascope functionality preserved) to 1304 (6.64%). PLLs are a valuable resource for a multi-sensor camera as we keep the possibility of using different sensors/clocks on each sensor port.

    Testing the hardware

    Testing with Python programs

    All the initial work with the actual hardware was done with a Python script that started as a reimplementation of the same functionality used when simulating the project. Most of it is in x393sata.py, which imports x393_vsc3304.py to control the VSC3304 crosspoint switch. This option turned out very useful for troubleshooting, starting from the initial testing of the SSD connection (the switch can route the SSD to the desktop computer), then for verifying the OOB exchange (the only thing visible on my oscilloscope) – the switch was set to connect the SSD to the Zynq and to use the eSATA connector pins to duplicate signals between the devices, so probing did not change the electrical characteristics of the active lines. The Python program allowed us to detect communication errors, modify GTX attributes over DRP, and capture incoming data to reproduce similar conditions with the simulator. Step by step it was possible to receive the signature FIS and then run the identify command. In these tests I used a large area of the system memory that was reserved as a video ring buffer and set up as “coherent” DMA memory. We were not able to make it really “coherent” – the command data transmitted to the device (the controller reads it from system memory as a master) often contained just zeros, as the real data written by the CPU got stuck either in one of the caches or in the DDR memory controller write buffer. These errors only went away when we abandoned the use of coherent memory allocation and switched to streaming DMA with explicit synchronization using dma_sync_*_for_cpu/dma_sync_*_for_device.
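
    The explicit synchronization looks roughly like the sketch below (the function and buffer names are illustrative, not the actual ahci_elphel.c code); ownership of the streaming mapping is handed to the device before it reads the command data and returned to the CPU before the received data is examined:

        #include <linux/dma-mapping.h>

        static void issue_and_complete(struct device *dev,
                                       dma_addr_t cmd_dma, size_t cmd_len,
                                       dma_addr_t data_dma, size_t data_len)
        {
            /* CPU has just written the command table: flush it for the device */
            dma_sync_single_for_device(dev, cmd_dma, cmd_len, DMA_TO_DEVICE);

            /* ... start the command, wait for the D2H register FIS ... */

            /* hand the received data back to the CPU before reading it */
            dma_sync_single_for_cpu(dev, data_dma, data_len, DMA_FROM_DEVICE);
        }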

    AHCI driver for GNU/Linux

    Mikhail Karpenko is preparing a post about the software driver, and as expected this development stage revealed new controller errors that were not detected by just manually launching commands through the Python program. When we mounted the SSD and started to copy gigabyte files, the controller reported some fake CRC errors – and it happened with one SSD, but not with the other. Using the data capturing modules it was not so difficult to catch the conditions that caused the errors and then reproduce them with the simulator – one of the last bugs detected was that the link layer incorrectly handled single incoming HOLD primitives (a rather unlikely condition).

    Performance results

    First performance testing turned out to be rather discouraging – ‘dd’ reported a rate of under 100 MB/s. At that point I added profiling code to the controller, and the data rate for the raw transfers (I tried a command that involved reading 24 of the 8KB FISes), measured from the sending of the command FIS to the receiving of the D2H register FIS confirming the transfer, was 198MB/s – about 80% of the maximum for SATA2. Profiling the higher levels of the software, we noticed that there is virtually no overlap between the hardware and software operation. It is definitely possible to improve the result, but the fact that the software halved the speed of the operation suggests that even if the requests and their processing were done in parallel, it would consume 100% of the CPU power. Yes, there are two cores and the clock frequency can be increased (the current boards use a speed grade 2 Zynq, while the software still thinks it is speed grade 1 for compatibility with the first prototype), but it would still be a big waste in the camera. So we will likely bypass the file system for sequential recording of video/images and use the second partition of the SSD for raw recording, especially as we will record directly from the video buffer in system memory, so there is no dealing with scatter-gather descriptors and no need to synchronize system memory as no cache is involved. The memory controller is documented as being self-coherent, so reading the same memory while it is being written to through a different channel should cause the write operation to be performed first.

    Conclusions and future plans

    We’ve achieved useful functionality of the camera SATA controller, allowing recording to the internal high capacity M.2 SSD, so all the hardware is tested and cameras can be shipped to the users. The future upgrades (including SATA3) will be released in the same way as other camera software. On the software side we will first need to upgrade our camogm recorder to reduce CPU usage during recording and provide a 100% load to the SATA controller (rather easy when recording a continuous memory buffer). Later (it will be more important after the SATA3 implementation) we may optimize the controller even more and try to short-cut the video compressor outputs directly to the SATA controller, using the system memory as a buffer only when the SSD is not ready to receive data (they do take “timeouts”).

    We hope that this project will be useful for other developers who are interested in Free Software solutions and prefer the Real Verilog Code (RVC) to all those “wizards”, “black boxes” and “IP”.

    Software tools used (and not)

    Elphel designs and builds high performance cameras, striving to provide our users/developers with design freedom at every possible level. We do not use any binary-only modules or other hidden information in our designs – everything we know ourselves is posted online, usually on GitHub and Elphel Wiki. When developing for the FPGA, which unfortunately still depends on proprietary tools, we limit ourselves to tools that are free to download, to be in exactly the same position as many of our users. We can not make it necessary for the users (and consider it immoral) to purchase expensive tools to be able to modify the free software code for the hardware they purchased from Elphel, so no “Chipscopes” or other fancy proprietary tools were used in this project’s development.

    Keeping information free is a precondition, but it alone is not sufficient for many users to be able to effectively develop new functionality for the products – it also needs to be easy to do. In the area of FPGA design (a very powerful tool, giving performance that is not possible with software applications alone) we think of our users as smart people, but not necessarily professional FPGA developers. Like ourselves.

    Fig.3 FPGA development with VDT

    We learned a lesson from our previous FPGA projects that depended too much on particular releases of the Xilinx tools and were difficult to maintain even for ourselves. Our current code is easier to use, port and support; we tried to minimize dependence on particular tools and used what we think is a better development environment. I believe that the “Lego blocks” style is not the most productive way to develop FPGA projects, and it is definitely not the only one possible.

    Treating HDL code similarly to software code is no less powerful a paradigm, and in my opinion the development tools should not pretend to be “wizards” who know better than me what I am allowed (or not allowed) to do, but should be more like gentle secretaries or helpers who can take over much of the routine work, remind me about important events and provide appropriate suggestions (when asked). Such behavior is even more important if the particular activity is not the only one you do and you may come back to it after a long break. A good IDE should be like that – help you navigate the code, catch problems early, be useful with default settings, but provide the ability to fine-tune the functionality according to personal preferences. It is also important to provide a familiar environment; this is why we use the same Eclipse IDE for Verilog, Python, C/C++, Java and more. All our projects come with the initial project settings files that can be imported into this IDE (supplemented by the appropriate plugins) so you can immediately start development from the point where we currently left it.

    For FPGA development Elphel provides VDT – a powerful tool that includes deep Verilog support and integrates the free software simulator Icarus Verilog with the GitHub repository and the popular GTKWave for visualizing simulation results. It comes with preconfigured support for the FPGA vendors’ proprietary synthesis and implementation tools and allows the addition of other tools without requiring modification of the plugin code. The SATA project uses the Xilinx Vivado command line tools (not the Vivado GUI); support for several other FPGA tools is also available.

    by andrey at March 12, 2016 11:14 PM

    ZeptoBARS

    GD32F103CBT6 - Cortex-M3 with serial flash : weekend die-shot

    Giga Devices GD32F103CBT6 really surprised us:



    Giga Devices has been a serial flash manufacturer for quite some time. When they launched their ARM Cortex-M3 lineup (with some level of binary compatibility to STM32), instead of going the conventional route of making numerous dies with different flash and SRAM sizes, they went for an SRAM & logic die and a separate serial flash die. How could this work fast enough? Keep reading :-) At least the ESP8266 already taught us that executing code from serial flash and reaching acceptable speed is not impossible.

    Use of serial flash allows Giga Devices to increase maximum flash size in their microcontrollers quite a bit (currently they have up to 3MiB) and to save quite a bit on ARM licensing fees (if they are paying "per die design").


    The die has 110 pads, 9 of which are used by the flash die. The GD32F103CBT6 is in a TQFP48 package - which again suggests that this die is universal and also used in higher pin count models. Die size 2889x3039 µm.

    Logo:


    ADC capacitor bank:


    After etching to poly level we clearly see that there is no flash on the die:


    SRAM sizes are 32KiB in each of the largest blocks (128 KiB total) - these store code, which means the first 128KiB can be accessed faster than typical flash. GD32 chips with 20KiB of SRAM or less have no more than 128KiB of flash, so all flash content is served from SRAM. This might also mean that startup time is slower than one would expect. With this SRAM mirroring it is not surprising that the GD32 beats the STM32 in performance even at the same frequency while losing in idle & sleep power consumption. Consumption at full load is lower than the STM32's due to a better (smaller) manufacturing technology.

    2 smaller blocks are 10KiB each and are likely to be user-accessible SRAM.
    4 smallest blocks closest to the synthesized logic are 512B each.

    SRAM has cell size 2.04 µm², which is ~110nm. Scale 1px = 57nm:


    Standard cells:


    Low power standard cells:


    Flash die:

    Flash die size: 1565x1378 µm.

    PS. Thanks for the chips go to dongs from irc.

    March 12, 2016 12:00 PM

    March 08, 2016

    Harald Welte

    Linaro Connect BKK16 Keynote on GPL Compliance

    Today I had the pleasure of co-presenting the Linaro Connect BKK16 Keynote on GPL compliance together with Shane Coughlan.

    The main topics addressed by this presentation are:

    • Brief history about GPL enforcement and how it has impacted the industry
    • Ultimate Goal of GPL enforcement is compliance
    • The license is not an end in itself, but rather to facilitate collaborative development
    • GPL compliance should be more engineering and business driven, not so much legal (compliance) driven.

    The video recording is available at https://www.youtube.com/watch?v=b4Bli8h0V-Q

    Slides are available at http://git.gnumonks.org/index.html/laforge-slides/plain/2016/linaroconnect/compliance.html

    The video of a corresponding interview is available from https://www.youtube.com/watch?v=I6IgjCyO-iQ

    by Harald Welte at March 08, 2016 11:00 PM

    March 03, 2016

    Free Electrons

    Free Electrons at the Embedded Linux Conference 2016

    Like every year for about 10 years, the entire Free Electrons engineering team will participate in the next Embedded Linux Conference, taking place on April 4-6 in San Diego, California. For us, participating in such conferences is very important, as it allows us to remain up to date with the latest developments in the embedded Linux world, create contacts with other members of the embedded Linux community, and meet the community members we already know and work with on a daily basis via the mailing lists or IRC.

    Embedded Linux Conference 2016

    Over the years, our engineering team has grown, and with the arrival of two more engineers on March 14, it now numbers 9 people, all of whom are going to participate in the Embedded Linux Conference.

    As usual, in addition to attending, we also proposed a number of talks, some of which have been accepted and are visible in the conference schedule.

    As usual, our talks are centered around our areas of expertise: hardware support in the Linux kernel, especially for ARM platforms, and build system related topics (Buildroot, Yocto, autotools).

    We are looking forward to attending this event, and see many other talks from various speakers: the proposed schedule contains a wide range of topics, many of which look really interesting!

    by Thomas Petazzoni at March 03, 2016 01:49 PM

    February 24, 2016

    Harald Welte

    Report from the VMware GPL court hearing

    Today, I took some time off to attend the court hearing in the GPL violation/infringement case that Christoph Hellwig has brought against VMware.

    I am not in any way legally involved in the lawsuit. However, as a fellow (former) Linux kernel developer myself, and a long-term Free Software community member who strongly believes in the copyleft model, I of course am very interested in this case - and of course in an outcome in favor of the plaintiff. Nevertheless, the below report tries to provide an un-biased account of what happened at the hearing today, and does not contain my own opinions on the matter. I can always write another blog post about that :)

    I blogged about this case before briefly, and there is a lot of information publicly discussed about the case, including the information published by the Software Freedom Conservancy (see the link above, the announcement and the associated FAQ).

    Still, let's quickly summarize the facts:

    • VMware is using parts of the Linux kernel in their proprietary ESXi product, including the entire SCSI mid-layer, USB support, radix tree and many, many device drivers.
    • as is generally known, Linux is licensed under GNU GPLv2, a copyleft-style license.
    • VMware has modified all the code they took from the Linux kernel and integrated it into something they call vmklinux.
    • VMware has modified their proprietary virtualization OS kernel vmkernel with a specific API/symbols to interact with vmklinux
    • at least in earlier versions of ESXi, virtually any block device access has to go through vmklinux and thus the portions of Linux they took
    • vmklinux and vmkernel are dynamically linked object files that are linked together at run-time
    • the Linux code they took runs in the same execution context (address space, stack, control flow) as the vmkernel.

    Ok, now enter the court hearing of today.

    Christoph Hellwig was represented by his two German Lawyers, Dr. Till Jaeger and Dr. Miriam Ballhausen. VMware was represented by three German lawyers lead by Matthias Koch, as well as a US attorney, Michael Jacobs (by means of two simultaneous interpreters). There were also several members of the in-house US legal team of VMware present, but not formally representing the defendant in court.

    As is unusual for copyright disputes, there was quite some audience following the court. Next to the VMware entourage, there were also a couple of fellow Linux kernel developers as well as some German IT press representatives following the hearing.

    General Introduction of the presiding judge

    After some formalities (like the question whether or not a ',' is missing after the "Inc." in the way it is phrased in the lawsuit), the presiding judge started with some general remarks

    • the court is well aware of the public (and even international public) interest in this case
    • the court understands there are novel fundamental legal questions raised that no court - at least no German court - had so far to decide upon.
    • the court also is well aware that the judges on the panel are not technical experts and thus not well-versed in software development or computer science. Rather, they are a court specialized on all sorts of copyright matters, not particularly related to software.
    • the court further understands that Linux is a collaborative, community-developed operating system, and that the development process is incremental and involves many authors.
    • the court understands there is a lot of discussion about interfaces between different programs or parts of a program, and that there are a variety of different definitions and many interpretations of what interfaces are

    Presentation about the courts understanding of the subject matter

    The presiding judge continued to explain the court's understanding of the subject matter. They understood that VMware ESXi serves to virtualize computer hardware in order to run multiple copies of the same or of different versions of operating systems on it. They also understand that vmkernel is at the core of that virtualization system, and that it contains something called vmkapi which is an interface towards Linux device drivers.

    However, they initially misunderstood the case as somehow being about an interface between a Linux guest OS being virtualized on top of vmkernel. It took both defendant and plaintiff some time to illustrate that this is in fact not the subject of the lawsuit, and that you can still have portions of Linux running linked into vmkernel while exclusively virtualizing Windows guests on top of vmkernel.

    The court went on to share their understanding of the GPLv2 and its underlying copyleft principle: that it is not about abandoning the authors' rights but, to the contrary, about exercising copyright. They understood the license has implications on derivative works and demonstrated that they had been working with both the German translation as well as the English language original text of GPLv2. At least I was sort-of impressed by the way they grasped it - much better than some of the other courts that I had to deal with in the various cases I was bringing forward during my gpl-violations.org work before.

    They also illustrated that they understood that Christoph Hellwig has been developing parts of the Linux kernel, and that modified parts of Linux were now being used in some form in VMware ESXi.

    After this general introduction, there was the question of whether or not both parties would still want to settle before going further. The court already expected that this would be very unlikely, as it understood that the dispute serves to resolve fundamental legal questions, and there is hardly any compromise in the middle between using or not using the Linux code, or between licensing vmkernel under a GPL compatible license or not. And as expected, there was no indication from either side that they could see an out-of-court settlement of the dispute at this point.

    Right to sue / sufficient copyrighted works of the plaintiff

    There was quite some debate about the question whether or not the plaintiff has shown that he actually holds a sufficient amount of copyrighted materials.

    The question here is not whether Christoph has sufficient copyrightable contributions to Linux as a whole; for the matter of this legal case, what is relevant is which of his copyrighted works end up in the disputed product VMware ESXi.

    Due to the nature of the development process, where lots of developers make intermittent and incremental changes, it is not as straightforward to demonstrate this as one would hope. You cannot simply print an entire C file from the source code and mark large portions as being written by Christoph himself. Rather, lines have been edited again and again, were shifted, re-structured, re-factored. For non-developers like the judges, it is therefore not obvious to decide on this question.

    This situation is used by the VMware defense in claiming that overall, they could only find very few functions that could be attributed to Christoph, and that this may altogether be only 1% of the Linux code they use in VMware ESXi.

    The court recognized this as difficult, as in German copyright law there is the concept of fading: if the original work by one author has been edited to an extent that it is barely recognizable, his original work has faded and so have his rights. The court did not state whether it believed that this has happened. To the contrary, they indicated that it may very well be that only very few lines of code can actually make a significant impact on the work as a whole. However, it is problematic for them to decide, as they don't understand source code and software development.

    So if (after further briefs from both sides and deliberation of the court) this is still an open question, it might very well be the case that the court would request a technical expert report to clarify this to the court.

    Are vmklinux + vmkernel one program/work or multiple programs/works?

    Finally, there was some deliberation about the very key question of whether or not vmkernel and vmklinux were separate programs / works or one program / work in the sense of copyright law. Unfortunately only the very surface of this topic could be touched in the hearing, and the actual technical and legal arguments of both sides could not be heard.

    The court clarified that if vmkernel and vmklinux were considered one program, then indeed their use outside of the terms of the GPL would be an intrusion into the rights of the plaintiff.

    The difficulty is how to actually venture into the legal implications of certain technical software architecture, when the people involved have no technical knowledge on operating system theory, system-level software development and compilers/linkers/toolchains.

    A lot is thus left to how well and 'believably' the parties can present their case. It was very clear from the VMware side that they wanted to downplay the role and proportion of vmkernel and its Linux heritage. At times their lawyers made statements like linux is this small yellow box in the left bottom corner (of our diagram). So of course already the diagrams are drawn in a way to twist the facts according to their view on reality.

    Summary

    • The court seems very much interested in the case and wants to understand the details
    • The court recognizes the general importance of the case and the public interest in it
    • There were some fundamental misunderstandings on the technical architecture of the software under dispute that could be clarified
    • There are actually not that many facts that are disputed between both sides, except the (key, and difficult) questions on
      • does Christoph hold sufficient rights on the code to bring forward the legal case?
      • are vmkernel and vmklinux one work or two separate works?

    The remainder of this dispute will thus be centered on the latter two questions - whether in this court or in any higher courts that may have to re-visit this subject after either of the parties takes this further, if the outcome is not in their favor.

    In terms of next steps,

    • both parties have until April 15, 2016 to file further briefs to follow-up the discussions in the hearing today
    • the court scheduled May 19, 2016 as date of promulgation. However, this would of course only hold true if the court would reach a clear decision based on the briefs by then. If there is a need for an expert, or any witnesses need to be called, then it is likely there will be further hearings and no verdict will be reached by then.

    by Harald Welte at February 24, 2016 11:00 PM

    February 23, 2016

    Harald Welte

    Software under OSA Public License is neither Open Source nor Free Software

    It seems my recent concerns on the OpenAirInterface re-licensing were not unjustified.

    I contacted various legal experts in the Free Software legal community about this, and the response was unanimous: in all the feedback I received, the general opinion was that software under the OSA Public License V1.0 is neither Free Software nor Open Source Software.

    The rationale is that it does not fulfill the criteria of

    • the FSF Free Software definition, as the license does not fulfill freedom 0: The freedom to run the program as you wish, for any purpose (which obviously includes commercial use)
    • the Open Source Initiative's Open Source Definition, as the license must not discriminate against fields of endeavor, such as commercial use.
    • the Debian Free Software Guidelines, as the DFSG also require no discrimination against fields of endeavor, such as commercial use.

    I think we as the community need to be very clear about this. We should not easily tolerate that people put software under restrictive licenses but still call that software open source. This creates a bad impression to those not familiar with the culture and spirit of both Free Software and Open Source. It creates the impression that people can call something Open Source but then still ask royalties for it, if used commercially.

    It is a shame that entities like Eurecom and the OpenAirInterface Software Association are open-washing their software by calling it Open Source when in fact it isn't. This attitude frankly makes me sick.

    That's just like green-washing when companies like BP are claiming they're now an environmental friendly company just because they put some solar panels on the roof of some building.

    by Harald Welte at February 23, 2016 11:00 PM

    Bunnie Studios

    The Story Behind the Cover for The Essential Guide to Electronics in Shenzhen

    First, I want to say wow! I did not expect such a response to this book. When preparing for the crowdfunding campaign, I modeled several scenarios, and none of them predicted an outcome like this.

    The Internet has provided fairly positive feedback on the cover of the book. I’m genuinely flattered that people like how it turned out. There’s actually an interesting story behind the origins of the book cover, which is the topic of this post.

    It starts with part of a blog post series I did a while back, “The Factory Floor, Part 3 of 4: Industrial Design for Startups”. In that post, I outline a methodology for factory-aware design, and I applied these methods when designing my book cover. In particular, step 3 & 4 read:

    3. Visit the facility, and take note of what is actually running down the production lines. … Practice makes perfect, and from the operators to the engineers they will do a better job of executing things they are doing on a daily basis than reaching deep and exercising an arcane capability.

    4. Re-evaluate the design based on a new understanding of what’s possible, and iterate.

    My original cover design was going to be fairly conventional – your typical cardboard laminated in four color printing, or perhaps even a soft cover, and the illustration was to be done by the same fellow who did the cute bunny pictures that preface each chapter, Miran Lipovača.

    But, as a matter of practicing what I preach, I made a visit to the printing factory to see what was running down its lines. They had all manner of processes going on in the factory, from spine stitching to die cutting and lamination.


    Chibitronics’ Circuit Sticker Sketchbook is also printed at this factory

    One process in particular caught my eye – in the back, there was a room full of men using belt sanders with varying grits of sand paper to work the edges of books until they were silky smooth. Next to that was a hot foil transfer machine – through heat and pressure, it can apply a gold (or any other color) foil to the surface of paper. In this case, they were gilding the edges of books, in a style similar to that found on fancy bibles and prayer books. They could also use the same process to do a foil deboss on cardboard.


    Beltsanding the edges of a stack of books until they are silky smooth


    Closeup of the hot foil transfer mechanism


    Stacks of books with gleaming, gilded edges

    This is when I got the idea for the cover. These gilded books looked beautiful – and because the process is done in-house, I knew I could get it for a really good price. So, I went back to the drawing board and thought about what would look good using this process. The first idea was to take the bunny picture, and adapt it for the gold foil process. Unfortunately, the bunny illustrations relied heavily upon halftone grays, something which wouldn’t translate well into a gold foil process. Someone else suggested that perhaps I should do a map of China, with Shenzhen marked and some pictures of components around it. I didn’t like it for a number of reasons, the first one being the headache of securing the copyright to a decent map of China that was both geographically accurate and politically correct.

    So I did a Google image search for “gold leaf covers” just to see what’s out there. The typical motif I observed was some kind of filigree, typically with at least left/right symmetry, if not also up/down symmetry.

    I thought maybe I’d go and fire up Adobe Illustrator and start sketching some filigree patterns, but quickly gave up on that idea – it was a lot of work, and I’m not entirely comfortable with that tool. Then it hit me that individual PCB layers have the same sort of intricacy as a filigree – and I live and breathe PCB design.

    So, I started up my favorite PCB design package, Altium. I tried playing around a bit with the polygon fill function, using its hashing feature and adjusting the design rules to see if I couldn’t make a decent filigree with it. The effect seemed reasonable, especially when I used a fairly coarse fill and an additional design rule that caused polygon fills to keep a wide berth around any traces.

    Then I had to come up with some circuitry to fill the cover. I looked at a few of my circuit boards, and in reality, few practical circuits had the extreme level of symmetry I was looking for. So I went ahead and mocked up a fake circuit on the fly. I made a QFN footprint based on fictional design rules that would look good, and sorted through my library of connector footprints for ones that had large enough pads to print reasonably well using the foil transfer process. I found a 2.4GHz antenna and some large-ish connectors.

    I then decided upon a theme – generally, I wanted the book to go from RF on the bottom to digital on the top. So I started by drawing the outline of an A5 page, and putting a couple lines of symmetry down. In the lower left, I placed the 2.4 GHz antenna, and then coupled it to a QFN in a semi-realistic fashion, throwing a couple of capacitors in for effect. I added an SMA connector that spanned the central symmetry line, and then an HRS DF-11 connector footprint above it. I decided in the RF section I’d make extensive use of arcs in the routing, calling upon a motif quite common in RF design and visually distinct from digital routing. Next I added a SATA connector off to the middle edge, and routed a set of differential pairs to the TX/RX pads, to which I applied the trace length equalization feature of the PCB tool to make them wavy – just for added aesthetic effect.

    Then I started from the top left and designed the digital section. Nothing says “old school digital” to me louder than a DB-9 connector (and yes, you pedants, it’s technically a DE-9, but in my heart it will always be a DB-9), so I plopped one of those down up top. I decided I’d spice things up a bit by throwing series termination resistors between the connector and a fake QFN IC; yes, in reality, not all pins would have these, but I thought it looked more aesthetic to put it on all the pins. Then, I routed signals from the QFN as a bus, this time using 45 degree angles, to a 14-pin JTAG connector which I placed in the heart of the book. Everything starts and ends with the JTAG connector these days, so why not?

    The design now occupied just the left half of the board. I copied it, flipped it, and pasted it to create a perfect 2-fold symmetry around the vertical axis.

    Around all of this, I put a border with fiducials and gutters, the same as you would find in a PCB destined for production in an automated SMT line. You’ll notice I break symmetry by making the top right fiducial a square, not a circle; this is a hallmark feature of fiducials, since their purpose is to both align the vision recognition systems and determine if the PCB has been loaded into the machine correctly.

    Finally, I added the book title and author using Altium’s TrueType string facility, and ran an automated fill of the empty space to create the filigree.

    I actually designed the whole cover while I was on the long flight from Hong Kong to Amsterdam for 32C3. I find that airplane flights are excellent for doing PCB routing and design work like this, free of any distractions from the Internet. As a bonus, every now and then someone comes along and feeds you and tops up your glass of wine, allowing your creative streak to be unbroken by concerns about hunger or sobriety.

    When viewed in black and white, the book cover honestly looks a little “meh” – when I first saw it, I thought, “well, at least maybe the geeks will appreciate it”. But after seeing the faux-linen with gold foil transfer sample, I knew this was the design I would run with for production.

    The next difficult challenge was to not paint legs on the metaphorical snake. As an engineer, I disliked how over-simplified the design was. There really should be bypass capacitors around the digital components. And SATA requires series DC blocking caps. But I had to let all that go, set it aside, and stop looking at it as a design, and let it live its own life as the cover of a book.

    And so there you have it – the story behind perhaps the only book cover designed using Altium (if you have a gerber viewer, you can check out the gerber files). The design went from a .PcbDoc file, to a .DXF, to .AI, and finally placed in a .INDD – not your typical progression of file formats, but in the end, it was fun and worthwhile figuring it all out.

    Thanks again to everyone who helped promote and fund my book. I’m really excited to get started on the print run. The problem I’m facing now is I don’t know how many to print. Originally, I was fairly certain no matter what, I would just barely hit the minimum order quantity (MOQ) of 1,000 books. Now that the campaign has blown past that, I have to wait until the campaign finishes in 23 days before I know what to put on the purchase order to the manufacturer. And, shameless plug – if you’re interested in the book, it’s $5 cheaper if you back during the campaign, so consider getting your order in before the prices go up.

    by bunnie at February 23, 2016 06:07 PM

    February 22, 2016

    Bunnie Studios

    Name that Ware, February 2016

    The Ware for February 2016 is shown below.

    I couldn’t bring myself to blemish this beautiful ware by pixelating all of the part numbers necessary to make this month’s game a real challenge. Instead, I just relied upon a strategic cropping to remove the make and model number from the lower left corner of the board.

    Remember the TMS4464? Yah, back when TI’s thing was making DRAM, not voltage regulators, and when Foxconn made connectors, not iPhones. Somewhere along the way, some business guy coined the term “pivot” to describe such changes in business models.

    Thanks to Michael Steil for sharing this beautiful piece of history with me at 32C3!

    by bunnie at February 22, 2016 08:02 AM

    Winner, Name that Ware January 2016

    The Ware for January 2016 was a TPI model 342 water resistant, dual-input type K&J thermocouple thermometer. Picking a winner was tough. Eric Hill was extremely close on guessing the model number — probably the only difference between the TPI 343 and the 342 is a firmware change and perhaps the button that lets you pick between K/J type thermocouples, neither of which would be obvious from the image shown.

    However, I do have to give kudos to CzajNick for pointing out that the MCU in this is a 4-bit microcontroller. Holy shit, I didn’t know they made those anymore, much less that they’d be useful for anything beyond a calculator. This is probably the only functional 4-bit machine that I have in my lab. All of a sudden this thermometer got a little bit cooler in my mind. He also correctly identified the ware as some type of double-input thermocouple thermometer in the course of his analysis.

    Despite his not citing a specific make/model, I really appreciated the analysis, especially the factoid about this having a 4-bit microcontroller, so I’ll declare CzajNick the winner. Congrats and email me for your prize!

    Also, I’ll have to say, after tearing apart numerous pieces of shoddy Chinese test equipment to fix stupid problems in them, it was a real sight for sore eyes to see such a clean design with high quality, brand-name components. I guess this is 90’s-vintage Korean engineering for you — a foreshadowing of the smartphone onslaught to come out of the same region a decade later.

    by bunnie at February 22, 2016 08:02 AM

    February 20, 2016

    Harald Welte

    Osmocom.org migrating to redmine

    In 2008, we started bs11-abis, which was shortly after renamed to OpenBSC. At the time it seemed like a good idea to use trac as the project management system, to have a wiki and an issue tracker.

    When further Osmocom projects like OsmocomBB, OsmocomTETRA etc. came around, we simply replicated that infrastructure: Another trac instance with the same theme, and a shared password file.

    The problem with this (and possibly the way we used it) is:

    • it doesn't scale, as creating projects is manual, requires a sysadmin and is time-consuming. This meant e.g. SIMtrace was just a wiki page in the OsmocomBB trac installation + associated http redirect, causing some confusion.
    • issues cannot easily be moved from one project to another, or have cross-project relationships (like depending on an issue in another project)
    • we had to use an external planet in order to aggregate the blog of each of the trac instances
    • user account management the way we did it required shell access to the machine, meaning user account applications got dropped due to the effort involved. My apologies for that.

    Especially the lack of being able to move pages and tickets between trac instances has resulted in a suboptimal use of the tools. If we first write code as part of OpenBSC and then move it to libosmocore, the associated issues + wiki pages should be moved to a new project.

    At the same time, for the last 5 years we've been successfully using redmine inside sysmocom to keep track of many dozens of internal projects.

    So now, finally, we (zecke, tnt, myself) have taken up the task to migrate the osmocom.org projects into redmine. You can see the current status at http://projects.osmocom.org/. We could create a more comprehensive project hierarchy, and give libosmocore, SIMtrace, OsmoSGSN and many others their own project.

    Thanks to zecke for taking care of the installation/sysadmin part and the initial conversion!

    Unfortunately the conversion from trac to redmine wiki syntax (and structure) was not as automatic and straightforward as one would have hoped. But after spending one entire day going through the most important wiki pages, things are looking much better now. As a side effect, I have had a more comprehensive look into the history of all of our projects than ever before :)

    Still, a lot of clean-up and improvement is needed until I'm happy; particularly splitting the OpenBSC wiki into separate OsmoBSC, OsmoNITB, OsmoBTS, OsmoPCU and OsmoSGSN wikis is probably still going to take some time.

    If you would like to help out, feel free to register an account on projects.osmocom.org (if you don't already have one from the old trac projects) and mail me for write access to the project(s) of your choice.

    Possible tasks include

    • putting pages into a more hierarchic structure (there's a parent/child relationship in redmine wikis)
    • fixing broken links due to page renames / wiki page moves
    • creating a new redmine 'Project' for your favorite tool that has a git repo on http://git.osmocom.org/ and writing some (at least initial) documentation about it.

    You don't need to be a software developer for that!

    by Harald Welte at February 20, 2016 11:00 PM

    February 19, 2016

    Harald Welte

    Some update on recent OsmoBTS changes

    After quite some time of gradual bug fixing and improvement, some significant changes have been made in OsmoBTS over the last months.

    Just a quick reminder: In Fall 2015 we finally merged the long-pending L1SAP changes originally developed by Jolly, introducing a new intermediate common interface between the generic part of OsmoBTS, and the hardware/PHY specific part. This enabled a clean structure between osmo-bts-sysmo (what we use on the sysmoBTS) and osmo-bts-trx (what people with general-purpose SDR hardware use).

    The L1SAP changes had some fall-out that needed to be fixed, not a big surprise with any change that big.

    More recently however, three larger changes were introduced:

    proper Multi-TRX support

    Based on the new phy_link/phy_instance infrastructure, one can map each phy_instance to one TRX by means of the VTY / configuration file.

    The core of OsmoBTS now supports any number of TRXs, leading to flexible Multi-TRX support.
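
    As a rough sketch only (the exact VTY keywords may differ between OsmoBTS versions; see the example configuration files shipped with OsmoBTS), mapping two instances of one PHY to two TRXs could look roughly like this:

    phy 0
     instance 0
     instance 1
    bts 0
     trx 0
      phy 0 instance 0
     trx 1
      phy 0 instance 1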

    OCTPHY support

    A Canadian company called Octasic has been developing a custom GSM PHY for their custom multi-core DSP architecture (OCTDSP). Rather than re-inventing the wheel for everything on top of the PHY, they chose to integrate OsmoBTS on top of it. I've been working at sysmocom on integrating their initial code into OsmoBTS, resulting in a new osmo-bts-octphy back-end.

    This back-end has also recently been ported to the phy_link/phy_instance API and is Multi-TRX ready. You can both run multiple TRXs in one DSP and have multiple DSPs in one BTS, paving the road for scalability.

    osmo-bts-octphy is now part of OsmoBTS master.

    Corresponding changes to OsmoPCU (for full GPRS support on OCTPHY) are currently being worked on by Max at sysmocom.

    Litecell 1.5 PHY support

    Another Canadian company (Nutaq/Nuran) has been building a new BTS called Litecell 1.5. They also implemented OsmoBTS support, based on the osmo-bts-sysmo code. We've been able to integrate that code with the above-mentioned phy_link/phy_interface in order to support the MultiTRX capability of this hardware.

    Litecell 1.5 MultiTRX capability has also been integrated with OsmoPCU.

    osmo-bts-litecell15 is now part of OsmoBTS master.

    Summary

    • 2016 starts as the OsmoBTS year of MultiTRX.
    • 2016 also starts as a year of many more hardware choices for OsmoBTS
    • we see more commercial adoption of OsmoBTS outside of the traditional options of sysmocom and Fairwaves

    by Harald Welte at February 19, 2016 11:00 PM

    February 18, 2016

    Free Electrons

    Free Electrons speaking at the Linux Collaboration Summit

    Free Electrons engineers are regular speakers at the Embedded Linux Conference and Embedded Linux Conference Europe events from the Linux Foundation, in which our entire engineering team participates each year.

    In 2016, for the first time, we will also be speaking at the Collaboration Summit, an invitation-only event where, as the Linux Foundation presents it, “the world’s thought leaders in open source software and collaborative development convene to share best practices and learn how to manage the largest shared technology investments of our time”.

    Collaboration Summit 2016

    This event will take place on March 29-31 in Lake Tahoe, California, and the event schedule has been published recently. Free Electrons CTO Thomas Petazzoni will be giving a talk entitled Upstreaming hardware support in the Linux kernel: why and how?, during which we will share our experience working with HW manufacturers to bring the support for their hardware to the upstream Linux kernel, discuss the benefits of upstreaming, and present best practices for working with upstream.

    With a small team of engineers, Free Electrons has merged thousands of patches in the official Linux kernel over the last few years, and several of its engineers hold maintainer positions in the Linux kernel community. We are happy to take the opportunity of the Collaboration Summit to share some of our experience, and hopefully encourage and help other companies to participate upstream.

    by Thomas Petazzoni at February 18, 2016 04:16 PM

    February 15, 2016

    Free Electrons

    Initial support for ARM64 Marvell Armada 7K/8K platform

    Two weeks ago, we submitted the initial support for the Marvell Armada 3700, which was the first ARM64 platform that Free Electrons engineers contributed to the upstream Linux kernel.

    Today, we submitted initial support for another Marvell ARM64 platform, the Armada 7K and Armada 8K platform. Compared to the Armada 3700, the Armada 7K and 8K are much more on the high-end side: they use a dual Cortex-A72 or a quad Cortex-A72, as opposed to the Cortex-A53 for the Armada 3700.

    Marvell Armada 7K / Marvell Armada 8K

    The Armada 7K and 8K also use a fairly unique architecture: internally, they are composed of several components:

    • One AP (Application Processor), which contains the processor itself and a few core hardware blocks. The AP used in the Armada 7K and 8K is called AP806, and is available in two configurations: dual Cortex-A72 and quad Cortex-A72.
    • One or two CPs (Communication Processor), which contain most of the I/O interfaces (SATA, PCIe, Ethernet, etc.). The 7K family chips have one CP, while the 8K family chips integrate two CPs, providing twice the number of I/O interfaces available in a single CP. The CP used in the 7K and 8K is called CP110.

    All in all, this gives the following combinations:

    • Armada 7020, which is a dual Cortex-A72 with one CP
    • Armada 7040, which is a quad Cortex-A72 with one CP
    • Armada 8020, which is a dual Cortex-A72 with two CPs
    • Armada 8040, which is a quad Cortex-A72 with two CPs

    So far, we submitted initial support only for the AP806 part of the chip, with the following patch series:

    We will continue to submit more and more patches to support other features of the Armada 7K and 8K processors in the near future.

    by Thomas Petazzoni at February 15, 2016 11:02 AM

    Factory flashing with U-Boot and fastboot on Freescale i.MX6

    Introduction

    For one of our customers building a fairly low-volume product based on the i.MX6, we had to design a mechanism to perform the factory flashing of each product. The goal is to be able to take a freshly produced device from the state of a brick to a state where it has a working embedded Linux system flashed on it. This specific product uses an eMMC as its main storage, and our solution only needs a USB connection to the platform, which makes it a lot simpler than solutions based on the network (TFTP, NFS, etc.).

    In order to achieve this goal, we have combined the imx-usb-loader tool with the fastboot support in U-Boot and some scripting. Thanks to this combination of tools, running a single script is sufficient to perform the factory flashing, or even to restore an already flashed device back to a known state.

    The overall flow of our solution, executed by a shell script, is:

    1. imx-usb-loader pushes over USB a U-Boot bootloader into the i.MX6 RAM, and runs it;
    2. This U-Boot automatically enters fastboot mode;
    3. Using the fastboot protocol and its support in U-Boot, we send and flash each part of the system: partition table, bootloader, bootloader environment and root filesystem (which contains the kernel image).

    The SECO uQ7 i.MX6 platform used for our project.

    imx-usb-loader

    imx-usb-loader is a tool written by Boundary Devices that leverages the Serial Download Protocol (SDP) available in Freescale i.MX5/i.MX6 processors. Implemented in the ROM code of the Freescale SoCs, this protocol allows sending code over USB or UART to a Freescale processor, even on a platform that has nothing flashed (no bootloader, no operating system). It is therefore a very handy tool to recover i.MX6 platforms, or as an initial step for factory flashing: you can send a U-Boot image over USB and have it run on your platform.

    This tool already existed; we only created a package for it in the Buildroot build system, since Buildroot is used for this particular project.
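
    For illustration, a typical invocation looks like the sketch below (the binary built from the imx-usb-loader sources is called imx_usb; the image file name is just an example):

    # Board strapped into serial download mode and connected over USB OTG:
    # push a U-Boot image into i.MX6 RAM and execute it
    imx_usb u-boot.imx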

    Fastboot

    Fastboot is a protocol originally created for Android, which is used primarily to modify the flash filesystem via a USB connection from a host computer. Most Android systems run a bootloader that implements the fastboot protocol, and therefore can be reflashed from a host computer running the corresponding fastboot tool. It sounded like a good candidate for the second step of our factory flashing process, to actually flash the different parts of our system.

    Setting up fastboot on the device side

    The well-known U-Boot bootloader has limited support for this protocol.

    The fastboot documentation in U-Boot can be found in the source code, in the doc/README.android-fastboot file. A description of the available fastboot options in U-Boot can be found in this documentation, along with examples. This gives us the device side of the protocol.

    In order to make fastboot work in U-Boot, we modified the board configuration file to add the following configuration options:

    #define CONFIG_CMD_FASTBOOT                                  /* enable the "fastboot" command */
    #define CONFIG_USB_FASTBOOT_BUF_ADDR   CONFIG_SYS_LOAD_ADDR  /* RAM buffer used to receive images */
    #define CONFIG_USB_FASTBOOT_BUF_SIZE   0x10000000            /* buffer size: 256 MiB */
    #define CONFIG_FASTBOOT_FLASH                                /* enable the "fastboot flash" command */
    #define CONFIG_FASTBOOT_FLASH_MMC_DEV  0                     /* index of the MMC/eMMC device to flash */
    

    Other options have to be selected, depending on the platform, to fulfil the fastboot dependencies, such as USB gadget support, GPT partition support, partition UUID support or the USB download gadget. They aren't explicitly defined anywhere, but have to be enabled for the build to succeed.

    You can find the patch enabling fastboot on the Seco MX6Q uQ7 here: 0002-secomx6quq7-enable-fastboot.patch.

    U-Boot enters the fastboot mode on demand: it has to be explicitly started from the U-Boot command line:

    U-Boot> fastboot
    

    From now on, U-Boot waits over USB for the host computer to send fastboot commands.

    Using fastboot on the host computer side

    Fastboot needs a user-space program on the host computer side to talk to the board. This tool can be found in the Android SDK and is often available through packages in many Linux distributions. However, to make things easier, and as we did for imx-usb-loader, we sent a patch to add the Android tools such as fastboot and adb to the Buildroot build system. As of this writing, our patch is still waiting to be applied by the Buildroot maintainers.

    Thanks to this, we can use the fastboot tool to list the available fastboot devices connected:

    # fastboot devices
    

    Flashing eMMC partitions

    For its flashing feature, fastboot identifies the different parts of the system by name. U-Boot maps those names to the names of GPT partitions, so your eMMC normally needs to be partitioned using a GPT partition table rather than an old MBR partition table. For example, provided your eMMC has a GPT partition called rootfs, you can do:

    # fastboot flash rootfs rootfs.ext4
    

    This reflashes the contents of the rootfs partition with the rootfs.ext4 image.

    However, while using GPT partitioning is fine in most cases, i.MX6 has a constraint that the bootloader needs to be at a specific location on the eMMC that conflicts with the location of the GPT partition table.

    To work around this problem, we patched U-Boot to allow the fastboot flash command to use an absolute offset in the eMMC instead of a partition name. Instead of displaying an error if a partition does not exist, fastboot tries to use the name as an absolute offset. This allowed us to use MBR partitions and to flash our images, including U-Boot, at defined offsets. For example, to flash U-Boot, we use:

    # fastboot flash 0x400 u-boot.imx
    

    The patch adding this workaround in U-Boot can be found at 0001-fastboot-allow-to-flash-at-a-given-address.patch. We are working on implementing a better solution that can potentially be accepted upstream.
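
    To make the constraint more concrete, the sketch below shows an illustrative eMMC layout with MBR partitioning; apart from the 0x400 bootloader offset expected by the i.MX6 boot ROM, the offsets are made-up examples rather than the values used in our project:

    # 0x00000000  MBR partition table (512 bytes)
    # 0x00000400  U-Boot image: the i.MX6 boot ROM fetches the bootloader from
    #             this fixed offset, which would collide with the GPT partition
    #             entries if a GPT partition table were used
    # 0x000C0000  U-Boot environment (example offset)
    # partition 1 root filesystem (ext4), e.g. flashed as "rootfs"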

    Automatically starting fastboot

    The fastboot command must be explicitly called from the U-Boot prompt in order to enter fastboot mode. This is an issue for our use case, because it means the flashing process can't be fully automated and requires human interaction. Using imx-usb-loader, we want to send a U-Boot image that automatically enters fastboot mode.

    To achieve this, we modified the U-Boot configuration, to start the fastboot command at boot time:

    #define CONFIG_BOOTCOMMAND "fastboot"
    #define CONFIG_BOOTDELAY 0
    

    Of course, this configuration is only used for the U-Boot sent using imx-usb-loader. The final U-Boot flashed on the device will not have the same configuration. To distinguish the two images, we named the U-Boot image dedicated to fastboot uboot_DO_NOT_TOUCH.

    Putting it all together

    We wrote a shell script to automatically launch the modified U-Boot image on the board, and then flash the different images on the eMMC (U-Boot and the root filesystem). We also added options to flash an MBR partition table and to flash a zeroed file that wipes the U-Boot environment. Since Buildroot is used in our project, our tool makes some assumptions about the location of the tools and image files.

    Our script can be found here: flash.sh. To flash the entire system:

    # ./flash.sh -a
    

    To flash only certain parts, like the bootloader:

    # ./flash.sh -b 
    

    By default, our script expects the Buildroot output directory to be in buildroot/output, but this can be overridden using the BUILDROOT environment variable.
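
    As a minimal sketch of the overall flow, under the assumption that the images have already been built by Buildroot (the real flash.sh handles option parsing, error checking and paths differently, and the image names and offsets other than 0x400 are placeholders):

    #!/bin/sh
    # Hypothetical factory flashing sketch -- not the actual flash.sh
    BUILDROOT=${BUILDROOT:-buildroot/output}
    IMAGES=$BUILDROOT/images

    # 1. Push the fastboot-enabled U-Boot into RAM over USB and run it
    imx_usb $IMAGES/uboot_DO_NOT_TOUCH.imx

    # 2. Give U-Boot a moment to enumerate as a fastboot USB gadget
    sleep 5

    # 3. Flash each part of the system; with the patched U-Boot, a target can
    #    be a GPT partition name or, if no such partition exists, a raw offset
    fastboot flash 0x0     $IMAGES/mbr.bin        # partition table (offset 0)
    fastboot flash 0x400   $IMAGES/u-boot.imx     # bootloader at its fixed offset
    fastboot flash 0xC0000 $IMAGES/env-zero.bin   # zeroed file wiping the environment
    fastboot flash rootfs  $IMAGES/rootfs.ext4    # root filesystem (contains the kernel)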

    Conclusion

    By assembling existing tools and mechanisms, we have been able to quickly create a factory flashing process for i.MX6 platforms that is really simple and efficient. It is worth mentioning that we have re-used the same idea for the factory flashing process of the C.H.I.P computer. On the C.H.I.P, instead of using imx-usb-loader, we have used FEL based booting: the C.H.I.P indeed uses an Allwinner ARM processor, providing a different recovery mechanism than the one available on i.MX6.

    by Antoine Ténart at February 15, 2016 09:55 AM

    February 14, 2016

    Harald Welte

    Back from netdevconf 1.1 in Seville

    I've had the pleasure of being invited to netdevconf 1.1 in Seville, Spain.

    After about a decade of absence in the Linux kernel networking community, it was great to meet lots of former colleagues again, as well as to see what kind of topics are currently being worked on and under discussion.

    The conference had a really nice spirit to it. I like the fact that it is run by the community itself. Organized by respected members of the community. It feels like Linux-Kongress or OLS or UKUUG or many others felt in the past. There's just something that got lost when the Linux Foundation took over (or pushed aside) virtually any other Linux kernel related event on the planet in the past :/ So thanks to Jamal for starting netdevconf, and thanks to Pablo and his team for running this particular instance of it.

    I never really wanted to leave netfilter and the Linux kernel network stack behind - but then my problem appears to be that there are simply way too many things of interest to me, and I had to venture first into RFID (OpenPCD, OpenPICC), then into smartphone hardware and software (Openmoko) and finally embark on a journey of applied telecoms archeology by starting OpenBSC, OsmocomBB and various other Osmocom projects.

    Staying in Linux kernel networking land was simply not an option with a scope that can only be defined as wide as wanting to implement any possible protocol on any possible interface of any possible generation of cellular network.

    At times like attending netdevconf I wonder if I made the right choice back then. Linux kernel networking is a lot of fun and hard challenges, too - and it is definitely an area that's much more used by many more organizations and individuals: The code I wrote on netfilter/iptables is probably running on billions of devices by now. Compare that to the Osmocom code, which is probably running on a few thousand devices, if at all. Working on Open Source telecom protocols is sometimes a lonely fight. Not that I wouldn't value the entire team of developers involved in it - to the contrary. But lonely in the sense that 99.999% of that world is a proprietary world, and FOSS cellular infrastructure is just the 0.001% at the margin of all of that.

    On the Linux kernel side, you have virtually every IT company putting in their weight these days, and properly funded development is not that hard to come by. In cellular, reasonable funding for anything (compared to the scope and complexity of the tasks) is rather the exception than the norm.

    But no, I don't have any regrets. It has been an interesting journey and I probably had the chance to learn many more things than if I had stayed in TCP/IP-land.

    If only each day had 48 hours and I could work both on Osmocom and on the Linux kernel...

    by Harald Welte at February 14, 2016 11:00 PM

    February 12, 2016

    Video Circuits

    Glass House (1983)




    "Video by G.G. Aries
    Music by Emerald Web
    from California Images: Hi Fi For The Eyes"

    by Chris (noreply@blogger.com) at February 12, 2016 03:16 AM

    February 11, 2016

    Elphel

    NC393 camera is fit for flight

    The components for 10393 and other related circuit boards for the new NC393 camera series have been ordered and contract manufacturing (CM) is ready to assemble the first batch of camera boards.

    In the meantime, the extruded parts that will be made into NC393 camera body have been received at Elphel. The extrusion looks very slick with thin, 1mm walls made out of strong 6061-T6 aluminium, and weighs only 55g. The camera’s new lightweight design is suitable for use on a small aircraft. The heat frame responsible for cooling the powerful processor has also been extruded.

    We are very pleased with the performance of Profile Precision Extrusions, located in Phoenix, Arizona, which has delivered a very accurate product ahead of the proposed schedule. Now we can proudly engrave “Made in USA” on the camera, as even the camera body parts are now made in the United States.

    Of course, we have tried to order the extrusion in China, but the intricately detailed profile is difficult to extrude and tolerances were hard to match, so when Profile Precision was recommended to us by local extrusion facilities we were happy to discover the outstanding quality this company offers.

     


     

    While waiting for the extruded parts we have been playing with another new toy: the 3D printer. We have been creating prototypes of various camera models of the NC393 series. The cameras are designed and modelled in a 3D virtual environment, and can be viewed and even taken apart by mouse click thanks to X3dom technology. The next step is to build actual parts on the 3D printer and physically assemble the camera prototypes, which will allow us to start using the prototypes in the physical world: finding what features are missing, and correcting and finalizing the design. For example, when the mini-panoramic NC393-4PI4 camera prototype was assembled it was clear that it needs the 4 fins (now seen on the final model) to protect the lenses from touching the surfaces as well as to provide shade from the sun. NC393-4PI4 and NC393-4PI4-IMU-GPS are small 360 degree panoramic cameras assembled with 4 fish-eye lenses especially suitable for interior panoramic applications.

    The prototypes are not as slick as the actual aluminium bodies, but they give a very good example of what the actual cameras will look like.

     


     

    As of today, the 10393 and other boards are in production, the prototypes are being built and tested for design functionality, and the aluminium extrusions have been received. With all this taken care of, we are now less than one month away from the NC393 being offered for sale; the first cameras will be distributed to the loyal Elphel customers who have placed and pre-paid orders several weeks ago.

    by olga at February 11, 2016 10:49 PM

    February 09, 2016

    Harald Welte

    netdevconf 1.1: Running cellular infrastructure on Linux

    Today I had the pleasure of presenting at netdevconf 1.1 a tutorial about Running cellular infrastructure on Linux. The tutorial is intended to guide you through the process of setting up + configuring your own minimal private GSM+GPRS network.

    The video recording is available from https://www.youtube.com/watch?v=I4i2Gy4JhDo

    Slides are available at http://git.gnumonks.org/index.html/laforge-slides/plain/2016/netdevconf-osmocom/running-foss-gsm.html

    by Harald Welte at February 09, 2016 11:00 PM

    February 04, 2016

    osPID

    Brand New Shining Website

    We’ve been working hard over the last month or so getting our old website sorted out. Out of date software running on the site, an enormous amount of spam on the forum, and software update mishaps led us to completely redo everything. The new website runs completely on WordPress, removing the wiki software (Mediawiki) and the forum software (phpbb). Now, both the forum and wiki are served through WordPress using bbPress and custom posts respectively. We did our best to migrate all content over from the old platforms. The wiki content came over perfectly, and we were even able to add some updates. The forum was also ported (posts/topics/accounts), but we were unable to bring over account passwords. As a result you will need to do a password reset before using the new forum. We’re sorry about the inconvenience.

    We hope that this new website will help us better serve the osPID community. Please let us know if there are any broken links or other issues with the website.

    Take care!

    by Phang Moh at February 04, 2016 01:55 PM

    February 03, 2016

    Bunnie Studios

    Help Make “The Essential Guide to Electronics in Shenzhen” a Reality

    Readers of my blog know I’ve been going to Shenzhen for some time now. I’ve taken my past decade of experience and created a tool, in the form of a book, that can help makers, hackers, and entrepreneurs unlock the potential of the electronics markets in Shenzhen. I’m looking for your help to enable a print run of this book, and so today I’m launching a campaign to print “The Essential Guide to Electronics in Shenzhen”.

    As a maker and a writer, the process of creating the book is a pleasure, but I’ve come to dread the funding process. Today is like judgment day; after spending many months writing, I get to find out if my efforts are deemed worthy of your wallet. It’s compounded by the fact that funding a book is a chicken-and-egg problem; even though the manuscript is finished, no copies exist, so I can’t send it to reviewers for validating opinions. Writing the book consumes only time; but printing even a few bound copies for review is expensive.

    In this case, the minimum print run is 1,000 copies. I’m realistic about the market for this book – it’s most useful for people who have immediate plans to visit Shenzhen, and so over the next 45 days I think I’d be lucky if I got a hundred backers. However, I don’t have the cash to finance the minimum print run, so I’m hoping I can convince you to purchase a copy or two of the book in the off-chance you think you may need it someday. If I can hit the campaign’s minimum target of $10,000 (about 350 copies of the book), I’ll still be in debt, but at least I’ll have a hope of eventually recovering the printing and distribution costs.

    The book itself is the guide I wish I had a decade ago; you can have a brief look inside here. It’s designed to help English speakers make better use of the market. The bulk of the book consists of dozens of point-to-translate guides relating to electronic components, tools, and purchasing. It also contains supplemental chapters to give a little background on the market, getting around, and basic survival. It’s not meant to replace a travel guide; its primary focus is on electronics and enabling the user to achieve better and more reliable results despite the language barriers.

    Below is an example of a point-to-translate page:

    For example, the above page focuses on packaging. Once you’ve found a good component vendor, sometimes you find your parts are coming in bulk bags, instead of tape and reel. Or maybe you just need the whole thing put in a shipping box for easy transportation. This page helps you specify these details.

    I’ve put several pages of the guide plus the whole sales pitch on Crowd Supply’s site; I won’t repeat that here. Instead, over the coming month, I plan to post a couple stories about the “making of” the book.

    The reality is that products cost money to make. Normally, a publisher takes the financial risk to print and market a book, but I decided to self-publish because I wanted to add a number of custom features that turn the book into a tool and an experience, rather than just a novel.

    The most notable, and expensive, feature I added is the pages of blank maps interleaved with business card and sample holders.

    Note that in the pre-print prototype above, the card holder pages are all in one section, but the final version will have one card holder per map.

    When comparison shopping in the market, it’s really hard to keep all the samples and vendors straight. After the sixth straight shop negotiating in Chinese over the price of switches or cables, it’s pretty common that I’ll swap a business card, or a receipt will get mangled or lost. These pages enable me to mark the location of a vendor, associate it with a business card and pricing quotation, and if the samples are small (like the LEDs in the picture above) keep the sample with the whole set. I plan on using a copy of the book for every project, so a couple years down the road if someone asks me for another production run, I can quickly look up my suppliers. Keeping the hand-written original receipts is essential, because suppliers will often honor the pricing given on the receipt, even a couple years later, if you can produce it. The book is designed to give the best experience for sourcing components in the Shenzhen electronic markets.

    In order to accommodate the extra thickness of samples, receipts and business cards, the book is spiral-bound. The spiral binding is also convenient for holding a pen to take notes. Finally, the spiral binding also allows you to fold the book flat to a page of interest, allowing both the vendor and the buyer to stare at the same page without fighting to keep the book open. I added an elastic strap in the back cover that can be used as a bookmark, or to help keep the book closed if it starts to get particularly full.

    I also added tabbed pages at the beginning of every major section, to help with quickly finding pages of interest. Physical print books enable a fluidity in human interaction that smartphone apps and eBooks often fail to achieve. Staring at a phone to translate breaks eye contact, and the vendor immediately loses interest; momentum escapes as you scroll, scroll, scroll to the page of interest, struggle with auto-correction on a tiny on-screen keyboard, or worse yet stare at an hourglass as pages load from the cloud. But pull out the book and start thumbing through the pages, the vendor can also see and interact with the translation guide. They become a part of the experience; it’s different, interesting, and keeps their attention. Momentum is preserved as both of you point at various terms on the page to help clarify the transaction.

    Thus, I spent a fair bit of time customizing the physical design of the book to make it into a tool and an experience. I considered the human factors of the Shenzhen electronics market; this book is not just a dictionary. This sort of tweaking can only be done by working with the printer directly; we had to do a bit of creative problem solving to figure out a process that brings all these elements together and can also pump out books at a rate fast enough to keep them affordable. Of course, the cost of these extra features is reflected in the book’s $35 cover price (discounted to $30 if you back the campaign now), but I think the book’s value as a sourcing and translation tool makes up for its price, especially compared to the cost of plane tickets. Or worse yet, getting the wrong part because of a failure to communicate, or losing track of a good vendor because a receipt got lost in a jumble of samples.

    This all brings me back to the point of this post. Printing the book is going to cost money, and I don’t have the cash to print and inventory the book on my own. If you think someday you might go to Shenzhen, or maybe you just like reading what I write or how the cover looks, please consider backing the campaign. If I can hit the minimum funding target in the next 45 days, it will enable a print run of 1,000 books and help keep it in stock at Crowd Supply.

    Thanks, and happy hacking!

    by bunnie at February 03, 2016 04:13 PM

    ZeptoBARS

    Noname TL431 : weekend die-shot

    Yet another noname TL431.
    Die size 730x571 µm.


    February 03, 2016 05:50 AM

    January 31, 2016

    Harald Welte

    On the OpenAirInterface re-licensing

    In the recent FOSDEM 2016 SDR Devroom, the Q&A session following a presentation on OpenAirInterface touched the topic of its controversial licensing. As I happen to be involved deeply with Free Software licensing and Free Software telecom topics, I thought I might have some things to say about this topic. Unfortunately the Q&A session was short, hence this blog post.

    As a side note, the presentation was certainly the least technical presentation in all of the FOSDEM SDR track, and that in front of a deeply technical audience. And probably the only presentation at all at FOSDEM talking a lot about "Strategic Industry Partners".

    Let me also state that I actually have respect for what OAI/OSA has been and still is doing. I just don't think it is attractive to the Free Software community - and it might actually not be Free Software at all.

    OpenAirInterface / History

    Within EURECOM, a group around Prof. Raymond Knopp has been working on a Free Software implementation of all layers of the LTE (4G) system known as OpenAirInterface. It includes the physical layer and goes through to the core network.

    The OpenAirInterface code was for many years under GPL license (GPLv2, other parts GPLv3). Initially the SVN repositories were not public (despite the license), but after some friendly mails one (at least I) could get access.

    I've read through the code at several points in the past; it often seemed much more like a (quick and dirty?) proof-of-concept implementation to me than anything more general-purpose. But then, that might have been a wrong impression on my behalf, or it might be that this was simply sufficient for the kind of research they wanted to do. After all, scientific research and FOSS often have a complicated relationship. Researchers naturally have their papers as the primary output of their work, and software implementations often are more like a necessary evil than the actual goal. But then, I digress.

    Now at some point in 2014, a new organization, the OpenAirInterface Software Association (OSA), was established. The idea apparently was to get involved with the tier-1 telecom suppliers (like Alcatel, Huawei, Ericsson, ...) and work together on an implementation of Free Software for future mobile data, so-called 5G technologies.

    Telecom Industry and Patents

    In case you don't know, the classic telecom industry loves patents. Pretty much anything and everything is patented, and the patents are heavily enforced. And not just between Samsung and Apple, or more recently also Nokia and Samsung - but basically all the time.

    One of the big reasons why even the most simple UMTS/3G capable phones are so much more expensive than GSM/2G is the extensive (and expensive) list of patents Qualcomm requires every device maker to license. In the past, this was not even a fixed per-unit royalty, but the license depended on the actual overall price of the phone itself.

    So wanting to work on a Free Software implementation of future telecom standards with active support and involvement of the telecom industry obviously means contention in terms of patents.

    Re-Licensing

    The existing GPLv2/GPLv3 license of the OpenAirInterface code of course would have meant that contributions from the patent-holding telecom industry would have to come with appropriate royalty-free patent licenses. After all, of what use is it if the software is free in terms of copyright licensing, but you still have the patents that make it non-free?

    Now the big industry of course wouldn't want to do that, so the OSA decided to re-license the code-base under a new license.

    As we apparently don't yet have sufficient existing Free Software licenses, they decided to create a new license. That new license (the OSA Public License V1.0) not only does away with copyleft, but also does away with a normal patent grant.

    This is very sad in several ways:

    • license proliferation is always bad. Major experts and basically all major entities in the Free Software world (FSF, FSFE, OSI, ...) are opposed to it and see it as a problem. Even companies like Intel and Google have publicly raised concern about license proliferation.
    • abandoning copyleft. Many people particularly from a GNU/Linux background would agree that copyleft is a fair deal. It ensures that everyone modifying the software will have to share such modifications with other users in a fair way. Nobody can create proprietary derivatives.
    • taking away the patent grant. Even the non-copyleft Apache 2.0 License the OSA used as a template has a broad patent grant, even for commercial applications. The OSA Public License has only a patent grant for use in a research context.

    In addition to this license change, the OSA also requires a copyright assignment from all contributors.

    Consequences

    What kind of effect does this have in case I want to contribute?

    • I have to sign away my copyright. The OSA can at any given point in time grant anyone whatever license they want to this code.
    • I have to agree to a permissive license without copyleft, i.e. everyone else can create proprietary derivatives of my work
    • I do not even get a patent grant from the other contributors (like the large Telecom companies).

    So basically, I have to sign away my copyright, and I get nothing in return. No copyleft that ensures other people's modifications will be available under the same license, no patent grant, and I don't even keep my own copyright to be able to veto any future license changes.

    My personal opinion (and apparently those of other FOSDEM attendees) is thus that the OAI / OSA invitation to contributions from the community is not a very attractive one. It might all be well and fine for large industry and research institutes. But I don't think the Free Software community has much to gain in all of this.

    Now OSA will claim that the above is not true, and that all contributors (including the Telecom vendors) have agreed to license their patents under FRAND conditions to all other contributors. It even seemed to me that the speaker at FOSDEM believed this was something positive in any way. I can only laugh at that ;)

    FRAND

    FRAND (Fair, Reasonable and Non-Discriminatory) is a frequently invoked buzzword for patent licensing schemes. It isn't actually defined anywhere, and is most likely just meant to sound nice to people who don't understand what it really means. Like, let's say, political decision makers.

    In practice, it is a disaster for individuals and small/medium sized companies. I can tell you first hand from having tried to obtain patent licenses from FRAND schemes before. While they might have reasonable per-unit royalties and they might offer those royalties to everyone, they typically come with ridiculous minimum annual fees.

    For example let's say they state in their FRAND license conditions you have to pay 1 USD per device, but a minimum of USD 100,000 per year. Or a similarly large one-time fee at the time of signing the contract.

    That's of course very fair to the large corporations, but it makes it impossible for a small company who sells maybe 10 to 100 devices per year, as USD 100,000 / 10 then equals USD 10,000 per device in terms of royalties. Does that sound fair and Non-Discriminatory to you?

    Summary

    OAI/OSA are trying to get a non-commercial / research-oriented foot into the design and specification process of future mobile telecom network standardization. That's a big and difficult challenge.

    However, the decisions they have taken in terms of licensing show that they are primarily interested in aligning with the large corporate telecom industry, and have thus created something that isn't really Free Software (missing non-research patent grant) and might in the end only help the large telecom vendors to uni-directionally consume contributions from academic research, small/medium sized companies and individual hackers.

    by Harald Welte at January 31, 2016 11:00 PM

    January 27, 2016

    January 26, 2016

    Michele's GNSS blog

    uBlox: Galileo, anti-jamming and anti-spoofing firmware

    Just downloaded the firmware upgrade for flash-based M8 modules from uBlox.
    Flashed it in no time.
    The result of UBX-MON-VER is now:



    So checked Galileo in CFG-GNSS:



    Result :)



    Incidentally, there is a "spoofing" flag now as well :O



    Don't dare trying this on M8T...

    by noreply@blogger.com (Michele Bavaro) at January 26, 2016 10:42 PM

    January 22, 2016

    Bunnie Studios

    Novena on the Ben Heck Show

    I love seeing the hacks people do with Novena! Thanks to Ben & Felix for sharing their series of adventures! The custom case they built looks totally awesome, check it out.

    by bunnie at January 22, 2016 04:37 PM

    January 21, 2016

    Bunnie Studios

    Name that Ware January 2016

    The Ware for January 2016 is shown below.

    I just had to replace the batteries on this one, so while it was open I tossed it in the scanner and figured it would make a fun and easy name that ware to start off the new year.

    by bunnie at January 21, 2016 03:37 PM

    Winner, Name that Ware December 2015

    The ware for December 2015 was a Thurlby LA160 logic analyzer. Congrats to Cody Wheeland for nailing it! email me for your prize. Also, thanks to everyone for sharing insights as to why the PCBs developed ripples of solder underneath the soldermask. Fascinating stuff, and now I understand why in PCB processing there’s a step of stripping the tin plate before applying the soldermask.

    by bunnie at January 21, 2016 03:37 PM

    January 19, 2016

    Free Electrons

    ELCE 2015 conference videos available

    As often in recent years, the Linux Foundation has shot videos of most of the talks at the Embedded Linux Conference Europe 2015, held in Dublin last October.

    These videos are now available on YouTube, and individual links are provided on the elinux.org wiki page that keeps track of presentation materials as well. You can also find them all through the Embedded Linux Conference Europe 2015 playlist on YouTube.

    All this is of course a priceless addition to the on-line slides. We hope these talks will encourage you to participate in the next editions of the Embedded Linux Conference, in San Diego in April, or in Berlin in October this year.

    In particular, here are the videos from the presentations from Free Electrons engineers.

    Alexandre Belloni, Supporting multi-function devices in the Linux kernel

    Kernel maintainership: an oral tradition

    Tutorial: learning the basics of Buildroot

    Our CTO Thomas Petazzoni also gave a keynote (Linux kernel SoC mainlining: Some success factors), which was well attended. Unfortunately, like for some of the other keynotes, no video is available.

    by Michael Opdenacker at January 19, 2016 01:06 PM

    January 15, 2016

    Bunnie Studios

    Making of the Novena Heirloom

    Make is hosting a wonderfully detailed article written by Kurt Mottweiler about his experience making the Novena Heirloom laptop. Check it out!


    by bunnie at January 15, 2016 05:39 PM

    Free Electrons

    Device Tree on ARM article in French OpenSilicium magazine

    Our French readers are most likely aware of the existence of a magazine called OpenSilicium, a magazine dedicated to embedded technologies, with frequent articles on platforms like the Raspberry Pi, the BeagleBone Black, topics like real-time, FPGA, Android and many others.

    Open Silicium #17

    Issue #17 of the magazine has been published recently, and features a 14-page article, Introduction to the Device Tree on ARM, written by Free Electrons engineer Thomas Petazzoni.

    Besides Thomas' article, many other topics are covered in this issue:

    • A summary of the Embedded Linux Conference Europe 2015 in Dublin
    • Icestorm, a free development toolset for FPGA
    • Using the Armadeus APF27 board with Yocto
    • Set up an embedded Linux system on the Zynq ZedBoard
    • Debugging with OpenOCD and JTAG
    • Usage of the mbed SDK on a small microcontroller, the LPC810
    • From Javascript to VHDL, the art of writing synthesizable code using an imperative language
    • Optimization of the 3R streams decompression algorithm

    by Thomas Petazzoni at January 15, 2016 09:16 AM

    Free Electrons at FOSDEM and the Buildroot Developers Meeting

    The FOSDEM conference will take place on January 30-31 in Brussels, Belgium. Like every year, there are lots of interesting talks for embedded developers, starting with the Embedded, Mobile and Automotive devroom, but also the Hardware and Graphics tracks. Some talks in the IoT and Security devrooms may also be interesting to embedded developers.

    Thomas Petazzoni, embedded Linux engineer and CTO at Free Electrons, will be present during the FOSDEM conference. Thomas will also participate in the Buildroot Developers Meeting that will take place on February 1-2 in Brussels, hosted by Google.

    by Thomas Petazzoni at January 15, 2016 08:52 AM

    January 14, 2016

    Free Electrons

    Linux 4.4, Free Electrons contributions

    Linux 4.4 has been released, a week later than the normal schedule, in order to allow kernel developers to recover from the Christmas/New Year period. As usual, LWN has covered the 4.4 cycle merge window in two articles: part 1 and part 2. This time around, KernelNewbies has a nice overview of the Linux 4.4 changes. With 112 patches merged, we are the 20th contributing company by number of patches, according to the statistics.

    Besides our contributions in terms of patches, some of our engineers have also become over time maintainers of specific areas of the Linux kernel. Recently, LWN.net conducted a study of how the patches merged in 4.4 went into the kernel, which shows the chain of maintainers who pushed the patches up to Linus Torvalds. Free Electrons engineers had the following role in this chain of maintainers:

    • As a co-maintainer of the Allwinner (sunxi) ARM support, Maxime Ripard has submitted a pull request with one patch to the clock maintainers, and pull requests with a total of 124 patches to the ARM SoC maintainers.
    • As a maintainer of the RTC subsystem, Alexandre Belloni has submitted pull requests with 30 patches directly to Linus Torvalds.
    • As a co-maintainer of the AT91 ARM support, Alexandre Belloni has submitted pull requests with 46 patches to the ARM SoC maintainers.
    • As a co-maintainer of the Marvell EBU ARM support, Gregory Clement has submitted pull requests with a total of 33 patches to the ARM SoC maintainers.

    Our contributions for the 4.4 kernel were centered around the following topics:

    • Alexandre Belloni continued some general improvements to support for the AT91 ARM processors, with fixes and cleanups in the at91-reset, at91-poweroff, at91_udc, atmel-st, at91_can drivers and some clock driver improvements.
    • Alexandre Belloni also wrote a driver for the RV8803 RTC from Microcrystal.
    • Antoine Ténart added PWM support for the Marvell Berlin platform and enabled the use of cpufreq on this platform.
    • Antoine Ténart made some improvements to the pxa3xx_nand driver, still in preparation for adding support for the Marvell Berlin NAND controller.
    • Boris Brezillon did a number of improvements to the sunxi_nand driver, used for the NAND controller found on the Allwinner SoCs. Boris also merged a few patches doing cleanups and improvements to the MTD subsystem itself.
    • Boris Brezillon enabled the cryptographic accelerator on more Marvell EBU platforms by submitting the corresponding Device Tree descriptions, and he also fixed a few bugs found in the driver.
    • Maxime Ripard reworked the handling of per-CPU interrupts on Marvell EBU platforms, especially in the mvneta network driver. This was done in preparation for enabling RSS support in the mvneta driver.
    • Maxime Ripard added support for the Allwinner R8 and the popular C.H.I.P platform.
    • Maxime Ripard enabled audio support on a number of Allwinner platforms, by adding the necessary clock code and Device Tree descriptions, and also made several fixes/improvements to the ALSA driver.

    The details of our contributions for 4.4:

    by Thomas Petazzoni at January 14, 2016 02:32 PM

    January 13, 2016

    Michele's GNSS blog

    NT1065 review

    So I finally got around to testing the NT1065… apologies for the lack of detail, but I have done this in my very little spare time. Also, I would like to clarify that I am in no way affiliated with NTLab.

    Chip overview

    A picture is worth a thousand words.
    Figure 1: NT1065 architecture
    Things worth noting above are:
    • Four independent input channels with variable RF gain, so up to 4 distinct antennas can be connected;
    • Two LOs controlled by integer synthesizers, one per pair of channels, tuned respectively for the high and low RNSS bands; one can also choose to route the upper LO to the lower pair and have 4 phase-coherent channels;
    • ADC sample rate derived from either LO through integer division;
    • 4 independent image-reject mixers, IF filters and variable-gain (with AGC) paths;
    • Four independent outputs, either as a CMOS two-bit ADC or as analogue differential signals, so one could
      • connect his/her own ADC, or
      • phase-combine the IF outputs in a CRPA fashion prior to digitisation;
    • standard SPI port control.
    Another important point for a hardware designer (I used to be a little bit of that) is this:
    Figure 2: NT1065 application schematic
    The pin allocation shows a 1 cm² QFN88 (with a 0.4 mm pin pitch) with plenty of room between the pins and an optimal pinout for easy routing of the RF and IF channels. Packages like that aren’t easy to find nowadays for such complex RF ICs (everything is a BGA or WLCSP), but I love QFNs because they are easy to solder with a bit of SMD practice and can be “debugged” if the PCB layout is not perfect the first time.

    Evaluation kit overview

    The evaluation kit presents itself like this:
    Figure 3: NT1065 evaluation kit
    One can see the RF inputs at the top, the external reference clock input on the left, the control interface on the right and the IF/digital part at the bottom. The large baluns (for differential-to-single-ended conversion) were left unpopulated for me, as I don't use the Red Pitaya (yet?). The control board is the same one used for the NT1036.
    I configured the evaluation kit to be powered by the control board (that was a mistake, see later) and connected the ADC outputs and clock to the Spartan-6 on the SdrNav40, used here simply as a USB-HS DAQ. In total, there is one clock line and 8 data lines (4 pairs of SIGN/MAGN, one per channel).
    The IF filters act on the Lower Side Band (LSB) or the Upper Side Band (USB) for high and low injection mixing respectively, and can be configured for a cut-off frequency between 10 and 35 MHz. Thus, bandwidths of up to 30 MHz per signal can be accommodated and the minimum ADC sampling rate should be around 20 Msps. 20 MByte/sec is not easy to handle for a USB-HS controller, so I will look into other more suitable (but still cost-effective) DAQ options to evaluate the front-end. In the meantime, I could do a lot with the 32 MByte/sec of the FX2LP by testing either 2 channels only with 2 bits, or all 4 channels with 1 bit and compressing nibbles into bytes (halving the requested rate).
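    To make the data-rate trade-off concrete, here is a minimal sketch of the nibble-packing idea (my own illustration, not the actual SdrNav40/FX2LP firmware or its data format): with all four channels captured as 1-bit sign-only samples, each sample instant fits in a nibble, so two consecutive instants can share one byte.

        import numpy as np

        def pack_nibbles(sign_bits):
            # sign_bits: uint8 array of shape (N, 4), one 0/1 sign bit per channel,
            # with N even. Returns N//2 bytes, two sample instants per byte.
            assert sign_bits.shape[0] % 2 == 0
            weights = np.array([1, 2, 4, 8], dtype=np.uint8)   # LSB = channel 1
            nibbles = (sign_bits.astype(np.uint8) * weights).sum(axis=1).astype(np.uint8)
            return nibbles[0::2] | (nibbles[1::2] << 4)        # low nibble = even instant

        def unpack_nibbles(packed):
            # Inverse operation: back to an (N, 4) array of 0/1 sign bits.
            nibbles = np.empty(packed.size * 2, dtype=np.uint8)
            nibbles[0::2] = packed & 0x0F
            nibbles[1::2] = packed >> 4
            return (nibbles[:, None] >> np.arange(4)) & 1

    At 53 Msps this packing brings the stream down to about 26.5 MByte/sec.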
    The evaluation software is a single window, very simple and intuitive to use, yet very effective.
    Figure 4: Evaluation software
    The software comes with several sample configuration files that can be very useful to quickly start evaluating the chip.

    Tests

    All my tests used a good 10MHz CMOS reference.

    GPS L1

    The first test was GPS L1 in high injection mode, setting the first LO to 1590 MHz (R1=1, N1=159), leading to an IF of -14.58 MHz, a filter bandwidth of about 28 MHz and a sampling frequency of 53 Msps (K1/2=15). I streamed one minute of data to disk and verified correct operation.
    Figure 5: GPS L1 PSD (left) and histogram+time series (right)
    Figure 6: G30 correlation of L1 code detail (left) and all satellites (right).
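    For readers who want to reproduce the frequency plan, here is a back-of-the-envelope check in Python (my own notation, assuming the synthesizer gives f_LO = f_ref * N / R and that the ADC clock is the LO divided by twice the K/2 value quoted above):

        F_REF = 10e6                           # external 10 MHz CMOS reference
        GPS_L1 = 1575.42e6

        lo1 = F_REF * 159 / 1                  # R1 = 1, N1 = 159  ->  1590 MHz
        if_l1 = GPS_L1 - lo1                   # high-side injection  ->  -14.58 MHz
        fs = lo1 / (2 * 15)                    # K1/2 = 15  ->  53 Msps

        print(f"{lo1/1e6:.2f} {if_l1/1e6:.2f} {fs/1e6:.2f}")   # 1590.00 -14.58 53.00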

    GPS L1/L5

    When performing this test I bumped into a hardware problem. If the control board powers the NT1065 evaluation kit from its internal 3.3V reference, the power line goes through a small resistor, so the voltage depends on the current drawn by the chip (undesirable!). Enabling the second channel in the GUI made the chip draw more current, so the supply voltage on the evaluation kit drifted away from the SdrNav40 one, which was steady at 3.3V. The level mismatch made reading the digital lines unreliable and no meaningful data could be transferred. So I powered the evaluation kit from the SdrNav40 3.3V reference and everything was happy again.
    In this configuration L1 is again at -14.58 MHz (1590 MHz, high-side injection) and L5 is on the third channel (low RNSS) at -13.55 MHz (R2=1, N2=119 for 1190 MHz, high-side injection). Worth noting is the relatively large spike in the spectrum at 1166 MHz; it is not an obvious harmonic, so it could be some unwanted emission from neighbouring equipment.
    Figure 7: L5 PSD (left) and histogram+time series (right)
    Figure 8: G30 correlation of L5 code detail (left) and all satellites (right).
    Interestingly, the Matlab satellite search algorithm returns respectively for L1 and L5:
    Searching GPS30 -> found: Doppler +4500.0 CodeShift:  35226 xcorr: 12502.4
    Searching GPS30 -> found: Doppler +3000.0 CodeShift:  35226
    The above outputs show coarse but correctly scaled Doppler [Hz] and a perfect match in code delay [samples] (just by chance spot on).
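    As a quick sanity check of the "correctly scaled" remark (my own addition), the Doppler shift is proportional to the carrier frequency, so the coarse L5 estimate should be roughly the L1 one scaled by f_L5/f_L1:

        F_L1, F_L5 = 1575.42e6, 1176.45e6
        dopp_l1 = 4500.0                       # Hz, from the L1 search above
        print(dopp_l1 * F_L5 / F_L1)           # ~3360 Hz, consistent with the +3000 Hz
                                               # returned by the coarse (large-bin) L5 search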

    4x GPS L1

    In this case I enabled all 4 channels and shared the LO amongst them all. Unfortunately I cannot show the 6 dB increase in gain when steering a beam towards a satellite, as all RF inputs were connected to the same antenna and, the noise being the same, steering the phase is useless. However, it is possible to verify that the phase amongst the channels is perfectly coherent (a requirement for an easy CRPA).
    The signals were conveniently brought to baseband, filtered and decimated by 5, resulting in a 10.6 MHz sampling rate. As one can see below, the power was well matched and the inter-channel carrier phase is extremely steady and constant over the 60-second capture. In this zero-baseline case, one can easily check that the phase difference is also the same across different satellites (as it does not depend on geometry, but just on the different path lengths beyond the splitter).
    Figure 9: PSD of the IF obtained from the 4 channels and relative carrier phase
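    For reference, the kind of processing meant here can be sketched in a few lines (a minimal illustration under my own assumptions, not the actual script behind the figure): each real IF stream is mixed to complex baseband, low-pass filtered and decimated by 5, after which the inter-channel phase can be read directly off the complex samples.

        import numpy as np
        from scipy.signal import decimate

        FS = 53e6                   # ADC rate used in this test
        F_IF = -14.58e6             # GPS L1 IF with the 1590 MHz LO

        def to_baseband(x, fs=FS, f_if=F_IF, dec=5):
            n = np.arange(x.size)
            bb = x * np.exp(-2j * np.pi * f_if / fs * n)   # shift the L1 IF to 0 Hz
            return decimate(bb, dec, ftype='fir')          # anti-alias filter + /5 -> 10.6 Msps

        # Relative carrier phase between two channels, e.g.:
        #   np.angle(np.vdot(to_baseband(ch1), to_baseband(ch2)))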

    GPS L1 + Glonass G1 + GPS L5 + Glonass G3

    Here I wanted to verify reception of Glonass G1 on the second channel (upper side band). At this point it had become merely a formality. Glonass CH0 ends up at a +12 MHz IF, and the acquisition returned correctly, as shown below. Of course, 53 Msps for a BPSK(0.5) is a bit of overkill :)
    Figure 10: Glonass acquisition all satellites (left) and CH-5 detail (right).

    GPS L1 + Beidou B1 + GPS L5 + Galileo E5b

    The case of GPS and Beidou was a bit more challenging, as the spacing between L1 and B1 is only 14.322 MHz, so the IFs must be around 7 MHz. I decided to set the LO to 1570 MHz (R1=1, N1=157): GPS went upper side band on channel 1 at a +5.42 MHz IF, and Beidou consequently went lower side band on channel 2 at -8.902 MHz. Channels 3 and 4 were enabled with LO2 set at 1190 MHz, roughly in the middle between E5a and E5b, in order to verify AltBOC reception.
    As 1570 MHz is a nasty frequency from which to derive a round sampling rate, I decided to derive the ADC clock from LO2 using K2/2 = 10 and therefore stream at 59.5 Msps. As one can see below, the L1 peak has now moved very close to baseband and the sampling frequency comfortably exceeds the Nyquist requirement.
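    The same back-of-the-envelope arithmetic as before confirms the numbers quoted here (again my own check, not NTLab material):

        GPS_L1, BDS_B1 = 1575.42e6, 1561.098e6
        GAL_E5A, GAL_E5B = 1176.45e6, 1207.14e6

        lo1, lo2 = 157 * 10e6, 119 * 10e6      # 1570 MHz and 1190 MHz
        print((GPS_L1 - BDS_B1) / 1e6)         # 14.322 MHz L1/B1 spacing
        print((GPS_L1 - lo1) / 1e6)            # +5.42  MHz, upper side band
        print((BDS_B1 - lo1) / 1e6)            # -8.902 MHz, lower side band
        print((GAL_E5A + GAL_E5B) / 2 / 1e6)   # 1191.795 MHz: 1190 MHz sits roughly
                                               #   in the middle between E5a and E5b
        print(lo2 / (2 * 10) / 1e6)            # K2/2 = 10  ->  59.5 Msps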
    Figure 11: GPS acquisition with close-in IF
    Figure 12: Beidou B1 spectrum (MSS on the right) and acquisition (incidentally also showing IGSO generation 3 satellites C31 and C32).
    Figure 13: E5a acquisition of E30
    Figure 14: E5b acquisition of E30, showing a perfect match in code delay with E5a as one would expect.

    Conclusions and work to do

    I am very surprised at how little time it took me from unboxing the kit to successfully using it to acquire all the GNSS signals I could think of, and to test all configurations. Of course I had prior experience with the NT1036, but this time I had the perception of a solid, feature-rich, plug-and-play IC.
    On my todo list there is the extension of this post with a home-made measurement of channel isolation... and the way I plan to do it should be interesting to the readers :)

    by noreply@blogger.com (Michele Bavaro) at January 13, 2016 09:49 PM

    January 11, 2016

    Altus Metrum

    AltOS 1.6.2

    AltOS 1.6.2 — TeleMega v2.0 support, bug fixes and documentation updates

    Bdale and I are pleased to announce the release of AltOS version 1.6.2.

    AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STM32F042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

    This is a minor release of AltOS, including support for our new TeleMega v2.0 board, a small selection of bug fixes and a major update of the documentation.

    AltOS Firmware — TeleMega v2.0 added

    The updated six-channel flight computer, TeleMega v2.0, has a few changes from the v1.0 design:

    • CC1200 radio chip instead of the CC1120. Better receive performance for packet mode, same transmit performance.

    • Serial external connector replaced with four PWM channels for external servos.

    • Companion pins rewired to match EasyMega functionality.

    None of these change the basic functionality of the device, but they do change the firmware a bit so there's a new package.

    AltOS Bug Fixes

    We also worked around a ground station limitation in the firmware:

    • Slow down telemetry packets so receivers can keep up. With TeleMega v2 offering a fast CPU and faster radio chip, it was overrunning our receivers so a small gap was introduced between packets.

    AltosUI and TeleGPS applications

    A few minor new features and fixes are in this release:

    • Post-flight TeleMega and EasyMega orientation computations were off by a factor of two

    • Downloading eeprom data from flight hardware would bail if there was an error in a data record. Now it keeps going.

    Documentation

    I spent a good number of hours completely reformatting and restructuring the Altus Metrum documentation.

    • I've changed the source format from raw docbook to asciidoc, which has made it much easier to edit and to use docbook features like links.

    • The CSS moves the table of contents out to a sidebar so you can navigate the HTML version easily.

    • There's a separate EasyMini manual now, constructed by taking sections from the larger manual.

    by keithp's rocket blog at January 11, 2016 05:03 AM

    January 03, 2016

    Harald Welte

    Conferences I look forward to in 2016

    While I was still active in the Linux kernel development / network security field, I was regularly attending 10 to 15 conferences per year.

    Doing so is relatively easy if you earn a decent freelancer salary and are working all by yourself. Running a company funded out of your own pockets, with many issues requiring (or at least benefiting from) personal physical presence in the office, changes that.

    Nevertheless, after some years of being less of a conference speaker, I'm happy to see that the tide is somewhat changing in 2016.

    After my talk at 32C3, I'm looking forward to attending (and sometimes speaking at) events in the first quarter of 2016. Not sure if I can keep up that pace in the following quarters...

    FOSDEM

    FOSDEM (http://fosdem.org/2016) is a classic, and I don't even remember for how many years I've been attending it. I would say it is fair to state that it is the single largest event specifically by and for community-oriented free software developers. It feels like home every time.

    netdevconf 1.1

    netdevconf (http://www.netdevconf.org/1.1/) is actually something I'm really looking forward to. It is a relatively new grass-roots conference: deeply technical, and oriented only towards Linux networking hackers, the part of the kernel community that I've known and loved during my old netfilter days.

    I'm very happy to attend the event, both for its technical content and of course to meet old friends like Jozsef, Pablo, etc. I also read that Kunihiro Ishiguro will be there. I always adored his initial work on Zebra (whose vty code we coincidentally use in almost all Osmocom projects as part of libosmovty).

    It's great to again see an event that is not driven by commercial / professional conference organizers, high registration fees, and corporate interests. It reminds me of the good old days when Linux was still the underdog and not mainstream... Think of Linuxtag in its early days?

    Linaro Connect

    I'll be attending Linaro Connect for the first time in many years. It's a pity that one cannot run various open source telecom protocol stack / network element projects and a company, and at the same time still be deeply involved in embedded Linux kernel/system development. So I'll use the opportunity to get some insight into that field again - and of course meet old friends.

    OsmoDevCon

    OsmoDevCon is the annual invitation-only meeting of the Osmocom developers. It's very low-profile, basically a no-frills family meeting of the Osmocom community. But it is really great to meet all of the team and hear about their respective experiences / special interest topics.

    TelcoSecDay

    This (https://www.troopers.de/events/troopers16/580_telcosecday_2016_invitation_only/) is another invitation-only event, organized by the makers of the TROOPERS conference. The idea is to make folks from the classic Telco industry meet with people in IT security who are looking at Telco-related topics. I was there some years ago, and will finally be able to make it again this year to talk about how the current introduction of 3G/3.5G into the Osmocom network-side elements can be used for security research.

    by Harald Welte at January 03, 2016 11:00 PM

    January 01, 2016

    Michele's GNSS blog

    Happy beginning of 2016

    2015 just passed. I don't write much here anymore as time has become a very precious resource and my job imposes tight limitations on what one can or cannot write on the web.
    The yearly update will quickly cover the constellation status, some info on low-cost RTK developments and some more SDR thoughts (although the most significant article in that respect will come soon in another post).

    Constellation updates


    As retrieved from Tomoji Takasu's popular diary, 2015 has seen the following launches:

    Date/Time (UTC)     Satellite             Orbit   Launcher        Launch Site               Notes
    2015/03/25 18:36    GPS Block IIF-9       MEO     Delta-IV        Cape Canaveral, US        G26
    2015/03/27 21:46    Galileo FOC-3, 4      MEO     Soyuz ST-B      Kourou, French Guiana     E26, E22
    2015/03/28 11:49    IRNSS-1D              IGSO    PSLV            Satish Dhawan SC, India   111.75E
    2015/03/31 13:52    BeiDou-3 I1           IGSO    Long March 3C   Xichang, China            C15
    2015/07/15 15:36    GPS Block IIF-10      MEO     Atlas-V         Cape Canaveral, US        G08
    2015/07/25 12:28    BeiDou-3 M1-S, M2-S   MEO     Long March 3B   Xichang, China            ?
    2015/09/10 02:08    Galileo FOC-5, 6      MEO     Soyuz ST-B      Kourou, French Guiana     E24, E30
    2015/09/29 23:23    BeiDou-3 I2-S         IGSO    Long March 3B   Xichang, China            ?
    2015/10/30 16:13    GPS Block IIF-11      MEO     Atlas-V         Cape Canaveral, US        G10
    2015/11/10 21:34    GSAT-15 (GAGAN)       GEO     Ariane 5        Kourou, French Guiana     93.5E
    2015/12/17 11:51    Galileo FOC-8, 9      MEO     Soyuz ST-B      Kourou, French Guiana     E??, E??


    GPS 

    GPS replaced three IIA birds with brand new IIF ones, as one can see in Figure 1. The number of GPS satellites transmitting L5 has now risen to 11 (as one can also verify with UNAVCO). The number of GPS satellites with L2C is instead 18 (quite close to a nominal constellation!). The question is now how GPS will proceed in 2016 and beyond, having seen the delays that affect OCX and in general the bad comments (see e.g. 1 and 2) on the progress of GPS modernisation.
    Figure 1: One year of GPS observations, obtained using a bespoke tool from the freely available data courtesy of the IGS network.
    Glonass

    Stable situation here, as seen in Figure 2, with the only exception of PRN 17 going offline in mid-October (perhaps soon to be replaced, according to the table of upcoming launches).
    Figure 2: One year of Glonass observations
    Galileo

    The situation has been very "dynamic" for Galileo but is indeed very promising, as seen in Figure 3. The latest launch went well and we can hope for several signals in space in 2016: hopefully the year that Galileo will make its appearance in most consumer devices. Incidentally, there are as of today 8 satellites broadcasting E5a.
    Figure 3: One year of Galileo observations
    Beidou 

    Also for Beidou the situation is rapidly evolving, as can be seen in Figure 4. My colleague James and I did a detailed study on the new-generation satellites and published part of it on GPS World. Indeed, the 3rd generation test birds host a very versatile payload that allows them to broadcast modern navigation signals on three frequencies. Incidentally, C34 and C33 (the two MEO space vehicles) also broadcast a QPSK signal on E5a.
    Figure 4: One year of Beidou observations.

    Low cost RTK

    An awful lot of progress here, with NVS, Skytraq, Geostar Navigation and uBlox releasing multi-constellation single frequency products for RTK.

    NVS released two products with an onboard GPS+Glonass (upgradeable to Galileo) RTK engine: the NV08C-RTK (for a standard base-rover configuration) and the NV08C-RTK-A (with added dual-antenna heading determination for precision AG). Rumors say that they both run a highly reworked version of RTKLIB on an LPC32xx microcontroller (ARM926EJ-S processor with VFP unit). The price is not public, but again rumors suggest it is a few hundred EUR apiece (in small quantities) for the single-receiver version. I got my hands on a couple of boards and built a simple adapter board to be able to use them with a standard laptop and a wireless module fitting the XBee socket (including this one).



    Skytraq has built on its NavSpark initiative and came out with two groundbreaking products, the S2525F8-RTK and the S2525F8-BD-RTK. The, I shall say, provocative prices of 50 and 150 USD respectively set a new threshold that will be very hard to beat. Skytraq has also done extensive analysis on the performance of GPS-only versus GPS+Beidou single-frequency RTK, e.g. here and here. In Asia the dual-constellation (2x CDMA), single-frequency (1540x and 1526x f0) RTK shows incredibly promising results, mainly due to the impressive number of birds in view. I got my hands on a couple of plug&play evaluation kits and already verified the sub-minute convergence time to fix in zero-baseline and good visibility conditions.



    Geostar Navigation has also recently released the GeoS-3MR, which is practically identical in capability to the GeoS-3 and GeoS-3M, but has a factory setting such that the most recent firmware provides carrier phase for both GPS and Glonass. Although the Glonass phase is not calibrated, statements from Tomoji last month suggest that this feature could be incorporated in v2.4.3 anyway.
    A few years ago I had designed and produced some carrier boards for the GeoS-3M, so I could just place an order for a few raw-capable chips (at 25 USD each) and test them out. The software provided by the manufacturer (Demo3 and toRNX) allows extracting RINEX observations from the binary logs. At the time I had also developed some parser code for RTKLIB, but I have now found out that it has a small issue... I don't feel like reinstalling C++ Builder just to fix it, so please, anyone, feel free to take that code and push it to v2.4.3.