copyleft hardware planet

February 12, 2016

Free Electrons

Factory flashing with U-Boot and fastboot on Freescale i.MX6

Introduction

For one of our customers building a fairly low-volume product based on the i.MX6, we had to design a mechanism to perform the factory flashing of each unit. The goal is to take a freshly produced device from the state of a brick to a state where it has a working embedded Linux system flashed on it. This specific product uses an eMMC as its main storage, and our solution only needs a USB connection to the platform, which makes it a lot simpler than network-based solutions (TFTP, NFS, etc.).

In order to achieve this goal, we have combined the imx-usb-loader tool with the fastboot support in U-Boot and some scripting. Thanks to this combination of tools, running a single script is sufficient to perform the factory flashing, or even to restore an already flashed device back to a known state.

The overall flow of our solution, executed by a shell script, is:

  1. imx-usb-loader pushes over USB a U-Boot bootloader into the i.MX6 RAM, and runs it;
  2. This U-Boot automatically enters fastboot mode;
  3. Using the fastboot protocol and its support in U-Boot, we send and flash each part of the system: partition table, bootloader, bootloader environment and root filesystem (which contains the kernel image).
The SECO uQ7 i.MX6 platform used for our project.

imx-usb-loader

imx-usb-loader is a tool written by Boundary Devices that leverages the Serial Download Protocol (SDP) available in Freescale i.MX5/i.MX6 processors. Implemented in the ROM code of the Freescale SoCs, this protocol allows sending code over USB or UART to a Freescale processor, even on a platform that has nothing flashed (no bootloader, no operating system). It is therefore a very handy tool to recover i.MX6 platforms, or as an initial step for factory flashing: you can send a U-Boot image over USB and have it run on your platform.

This tool already existed; we only created a package for it in the Buildroot build system, since Buildroot is used for this particular project.

Fastboot

Fastboot is a protocol originally created for Android, which is used primarily to modify the flash filesystem via a USB connection from a host computer. Most Android systems run a bootloader that implements the fastboot protocol, and therefore can be reflashed from a host computer running the corresponding fastboot tool. It sounded like a good candidate for the second step of our factory flashing process, to actually flash the different parts of our system.

Setting up fastboot on the device side

The well-known U-Boot bootloader has limited support for this protocol.

The fastboot documentation in U-Boot can be found in the source code, in the doc/README.android-fastboot file. It describes the available fastboot options in U-Boot and provides examples. This gives us the device side of the protocol.

In order to make fastboot work in U-Boot, we modified the board configuration file to add the following configuration options:

#define CONFIG_CMD_FASTBOOT
#define CONFIG_USB_FASTBOOT_BUF_ADDR    CONFIG_SYS_LOAD_ADDR
#define CONFIG_USB_FASTBOOT_BUF_SIZE    0x10000000
#define CONFIG_FASTBOOT_FLASH
#define CONFIG_FASTBOOT_FLASH_MMC_DEV   0

Other options have to be selected, depending on the platform, to fulfill the fastboot dependencies, such as USB gadget support, GPT partition support, partition UUID support and the USB download gadget. They aren't explicitly listed anywhere, but have to be enabled for the build to succeed.
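
As a rough illustration of what this implies (the exact option names and values depend on the U-Boot version and on the board, so treat this as an assumption to be checked against your tree rather than the exact set we used):

#define CONFIG_USB_GADGET                      /* USB gadget framework */
#define CONFIG_USB_GADGET_DUALSPEED
#define CONFIG_USB_GADGET_VBUS_DRAW    2
#define CONFIG_USBDOWNLOAD_GADGET              /* USB download gadget (g_dnl) */
#define CONFIG_G_DNL_VENDOR_NUM        0x18d1  /* USB IDs presented to the host, illustrative */
#define CONFIG_G_DNL_PRODUCT_NUM       0x0d02
#define CONFIG_G_DNL_MANUFACTURER      "Seco"  /* illustrative string */
#define CONFIG_CMD_GPT                         /* GPT partition support */
#define CONFIG_EFI_PARTITION
#define CONFIG_PARTITION_UUIDS                 /* partition UUID support */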

You can find the patch enabling fastboot on the Seco MX6Q uQ7 here: 0002-secomx6quq7-enable-fastboot.patch.

U-Boot enters the fastboot mode on demand: it has to be explicitly started from the U-Boot command line:

U-Boot> fastboot

From now on, U-Boot waits over USB for the host computer to send fastboot commands.

Using fastboot on the host computer side

Fastboot needs a user-space program on the host computer side to talk to the board. This tool can be found in the Android SDK and is often available through packages in many Linux distributions. However, to make things easier, and as we did for imx-usb-loader, we sent a patch to add the Android tools such as fastboot and adb to the Buildroot build system. As of this writing, our patch is still waiting to be applied by the Buildroot maintainers.

Thanks to this, we can use the fastboot tool to list the available fastboot devices connected:

# fastboot devices

Flashing eMMC partitions

For its flashing feature, fastboot identifies the different parts of the system by name. U-Boot maps those names to the names of GPT partitions, so your eMMC normally needs to be partitioned with a GPT partition table rather than an old MBR partition table. For example, provided your eMMC has a GPT partition called rootfs, you can run:

# fastboot flash rootfs rootfs.ext4

This reflashes the contents of the rootfs partition with the rootfs.ext4 image.

However, while using GPT partitioning is fine in most cases, the i.MX6 has a constraint: the bootloader needs to be at a specific location on the eMMC, and that location conflicts with the GPT partition table.

To work around this problem, we patched U-Boot to allow the fastboot flash command to use an absolute offset in the eMMC instead of a partition name. Instead of displaying an error if a partition does not exist, fastboot tries to interpret the name as an absolute offset. This allowed us to use MBR partitions and to flash our images, including U-Boot, at fixed offsets. For example, to flash U-Boot, we use:

# fastboot flash 0x400 u-boot.imx

The patch adding this workaround in U-Boot can be found at 0001-fastboot-allow-to-flash-at-a-given-address.patch. We are working on implementing a better solution that can potentially be accepted upstream.

Automatically starting fastboot

The fastboot command must be explicitly called from the U-Boot prompt in order to enter fastboot mode. This is an issue for our use case, because it means the flashing process cannot be fully automated and requires human interaction. Using imx-usb-loader, we want to send a U-Boot image that automatically enters fastboot mode.

To achieve this, we modified the U-Boot configuration, to start the fastboot command at boot time:

#define CONFIG_BOOTCOMMAND "fastboot"
#define CONFIG_BOOTDELAY 0

Of course, this configuration is only used for the U-Boot sent using imx-usb-loader. The final U-Boot flashed on the device will not have the same configuration. To distinguish the two images, we named the U-Boot image dedicated to fastboot uboot_DO_NOT_TOUCH.

Putting it all together

We wrote a shell script that automatically launches the modified U-Boot image on the board, and then flashes the different images on the eMMC (U-Boot and the root filesystem). We also added options to flash an MBR partition table and to flash a zeroed file that wipes the U-Boot environment. Since Buildroot is used in our project, the script makes some assumptions about the location of the tools and image files.
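
Although the real script is a bit more involved, the sequence it runs boils down to something like the following (the imx-usb-loader binary is invoked as imx_usb; the image names and all offsets except 0x400 are illustrative, not the actual values from our project):

# imx_usb uboot_DO_NOT_TOUCH.imx
# fastboot flash 0x0 mbr.bin
# fastboot flash 0x400 u-boot.imx
# fastboot flash <env offset> env-zero.bin
# fastboot flash <rootfs offset> rootfs.ext4

The first command pushes the fastboot-enabled U-Boot into RAM over SDP and runs it; the following ones flash the MBR partition table, the bootloader, a zeroed file over the U-Boot environment area, and finally the root filesystem.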

Our script can be found here: flash.sh. To flash the entire system:

# ./flash.sh -a

To flash only certain parts, like the bootloader:

# ./flash.sh -b 

By default, our script expects the Buildroot output directory to be in buildroot/output, but this can be overridden using the BUILDROOT environment variable.
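
For example, a hypothetical invocation pointing the script at another Buildroot tree (check the script itself for the exact directory layout it expects):

# BUILDROOT=/path/to/other/buildroot ./flash.sh -a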

Conclusion

By assembling existing tools and mechanisms, we have been able to quickly create a factory flashing process for i.MX6 platforms that is really simple and efficient. It is worth mentioning that we have re-used the same idea for the factory flashing process of the C.H.I.P computer. On the C.H.I.P, instead of using imx-usb-loader, we have used FEL based booting: the C.H.I.P indeed uses an Allwinner ARM processor, providing a different recovery mechanism than the one available on i.MX6.

by Antoine Ténart at February 12, 2016 04:06 PM

Video Circuits

Glass House (1983)




"Video by G.G. Aries
Music by Emerald Web
from California Images: Hi Fi For The Eyes"

by Chris (noreply@blogger.com) at February 12, 2016 03:16 AM

February 11, 2016

Elphel

NC393 camera is fit for flight

The components for 10393 and other related circuit boards for the new NC393 camera series have been ordered and contract manufacturing (CM) is ready to assemble the first batch of camera boards.

In the meantime, the extruded parts that will be made into the NC393 camera body have been received at Elphel. The extrusion looks very slick with thin, 1 mm walls made out of strong 6061-T6 aluminium, and weighs only 55 g. The camera’s new lightweight design is suitable for use on a small aircraft. The heat frame responsible for cooling the powerful processor has also been extruded.

We are very pleased with the performance of Profile Precision Extrusions, located in Phoenix, Arizona, which has delivered a very accurate product ahead of the proposed schedule. Now we can proudly engrave “Made in USA” on the camera, as even the camera body parts are made in the United States.

Of course, we had tried to order the extrusion in China, but the intricately detailed profile is difficult to extrude and the tolerances were hard to meet, so when Profile Precision was recommended to us by local extrusion facilities we were happy to discover the outstanding quality this company offers.

 


 

While waiting for the extruded parts we have been playing with another new toy: the 3D printer. We have been creating prototypes of various camera models of the NC393 series. The cameras are designed and modelled in a 3D virtual environment, and can be viewed and even taken apart by mouse click thanks to X3dom technology. The next step is to build actual parts on the 3D printer and physically assemble the camera prototypes, which will allow us to start using the prototypes in the physical world: finding what features are missing, and correcting and finalizing the design. For example, when the mini-panoramic NC393-4PI4 camera prototype was assembled, it was clear that it needed the 4 fins (now seen on the final model) to protect the lenses from touching the surfaces as well as to provide shade from the sun. NC393-4PI4 and NC393-4PI4-IMU-GPS are small 360-degree panoramic cameras assembled with 4 fish-eye lenses, especially suitable for interior panoramic applications.

The prototypes are not as slick as the actual aluminium bodies, but they give a very good example of what the actual cameras will look like.

 


 

As of today, the 10393 and other boards are in production, the prototypes are being built and tested for design functionality, and the aluminium extrusions have been received. With all this taken care of, we are now less than one month away from the NC393 being offered for sale; the first cameras will be distributed to the loyal Elphel customers who placed and pre-paid their orders several weeks ago.

by olga at February 11, 2016 10:49 PM

February 08, 2016

Free Electrons

Free Electrons contributes Linux support for a first ARM64 platform: Marvell Armada 3700

Over the last few years, Free Electrons has become a strong participant in the Linux ARM kernel community, with our engineers upstreaming support for numerous 32-bit ARM platforms.

Now, with ARM64 becoming more and more mainstream, our focus in 2016 will shift towards this architecture, and we’re happy to announce that we have recently contributed to the upstream Linux kernel the initial support for our first ARM64 platform: the Marvell Armada 3700.

This new SoC from Marvell is available in single-core and dual-core Cortex-A53 configurations, and features a wide range of peripherals: 2 Gigabit Ethernet controllers, USB 3.0 and 2.0, SATA, PCIe interfaces, DMA engines for XOR acceleration, and of course the usual SPI, I2C, UART, GPIO, SDIO interfaces. For more details, see the Product Brief.

So far, we have sent a patch series adding minimal support for this platform:

  • A UART driver, since this SoC uses a new, SoC-specific UART controller;
  • Small changes to an AHCI driver to support SATA;
  • The Device Tree files describing the SoC and the currently available development board. So far, only the CPU, timers, UART0, USB 3.0, SATA and GIC interrupt controllers are described.

A second version of the patch series was sent a few days later, in order to address comments received during the review.

It is worth mentioning that this SoC was publicly announced in a press release on January 6, 2016, and we’ve been able to send the initial support patches on February 2, 2016, less than a month later.

We’ll be progressively submitting support for all the other hardware blocks of the Armada 3700, and also be announcing soon our development efforts on several other ARM64 platforms.

by Thomas Petazzoni at February 08, 2016 09:45 AM

February 05, 2016

Free Electrons

2016 Q1 newsletter

This article was published in our quarterly newsletter.

The Free Electrons team wishes you a Happy New Year for 2016, with many new bits to enjoy in your life!

Free Electrons is happy to take this opportunity to share some news about the latest training and contribution activities of the company.

Free Electrons work on the $9 computer

As announced in our previous newsletter, Free Electrons has been working intensively on developing the low-level software support for the first $9 computer, the C.H.I.P by Next Thing Co.

Next Thing Co. has successfully delivered an initial batch of platforms in September to the early adopters, and has started shipping the final products in December to thousands of Kickstarter supporters.

Those products are using the U-Boot and Linux kernel ported by Free Electrons engineers, with numerous patches submitted to the official projects and more to be submitted in the coming weeks and months:

  • Support for the C.H.I.P platform itself, in U-Boot and in the Linux kernel;
  • Support for audio on Allwinner platforms added to the Linux kernel;
  • Development of a DRM/KMS driver for the graphics controller found on Allwinner platforms;
  • Significant research effort on finding appropriate solutions to support Multi-Level Cell NANDs in the Linux kernel;
  • Enabling of the NAND storage in Single-Level Cell mode, until the Multi-Level Cell mode can be enabled reliably;
  • Addition of NAND support in the fastboot implementation of U-Boot, which is used to reflash the C.H.I.P.

We will continue to work on the C.H.I.P over the next months, with, among other things, more work on the graphics and NAND sides.

Kernel contributions

The primary focus of the majority of our customer projects remains the Linux kernel, to which we continue to contribute very significantly.

Linux 4.2

We contributed 203 patches to this release, with a new IIO driver for the ADC found on Marvell Berlin platforms, a big cleanup of the support for Atmel platforms, improvements to the DMA controller driver for Atmel platforms, and a completely new driver for the cryptographic accelerator found on Marvell EBU platforms.

In this cycle, our engineer Alexandre Belloni became the official maintainer of the RTC subsystem.

See details on our contributions to Linux 4.2

Linux 4.3

We contributed 110 patches to this release, mainly with improvements to the DRM/KMS and DMA controller drivers for Atmel platforms, and power management improvements for Marvell platforms.

See details on our contributions to Linux 4.3

Linux 4.4

We contributed 112 patches to this release, the main highlights being an additional RTC driver, a PWM driver, support for the C.H.I.P platform, and improvements to the NAND support.

See details on our contributions to Linux 4.4

Work on ARM 64-bit platforms

We have started to work on supporting the Linux kernel on several ARM 64-bit platforms from different vendors. We will be submitting the initial patches in the coming weeks and will progressively improve the support for those platforms throughout 2016, as a major part of our Linux kernel contribution effort shifts to ARM 64-bit.

Growing engineering team

Our engineering team, currently composed of six engineers, will be significantly expanded in 2016:

  • Two additional embedded Linux engineers will join us in March 2016 and will be working with our engineering team in Toulouse, France. They will help us on our numerous Linux kernel and Linux BSP projects.
  • An engineering intern will join us starting early February, and will work on setting up a board farm to contribute to the kernelci.org automated testing effort. This will help us do more automated testing on the ARM platforms we work on.

Upcoming training sessions

We have public training sessions scheduled for the beginning of 2016:

Embedded Linux development training
February 29 – March 4, in English, in Avignon (France)
Embedded Linux kernel and driver development training
March 14-18, in English, in Avignon (France)
Android system development training
March 7-10, in English, in Toulouse (France)

We also offer the following training courses, on-site, anywhere in the world, upon request:

Contact us at training@free-electrons.com for details.

Conferences

We participated in the Embedded Linux Conference Europe in Dublin in October 2015, and gave a number of talks.

In addition, our engineer Thomas Petazzoni was invited to the Linux Kernel Summit, an invitation-only conference for the kernel maintainers and developers. He participated in the three-day event in Seoul, South Korea. See Free Electrons at the Linux Kernel Summit 2015.

At the beginning of 2016, our entire engineering team will be attending the Embedded Linux Conference in San Diego (US), which means that no less than 9 engineers from Free Electrons will be present at the conference!

Porting Linux on ARM seminar

In December 2015, we gave a half-day seminar entitled “Porting Linux on ARM” in Toulouse (France). The materials, in English, are now freely available on our web site.

by Michael Opdenacker at February 05, 2016 05:50 AM

February 04, 2016

osPID

Brand New Shining Website

We’ve been working hard over the last month or so getting our old website sorted out. Out-of-date software running on the site, an enormous amount of spam on the forum, and software update mishaps led us to completely redo everything. The new website runs completely on WordPress, removing the wiki software (MediaWiki) and the forum software (phpBB). Now, both the forum and the wiki are served through WordPress, using bbPress and custom posts respectively. We did our best to migrate all content over from the old platforms. The wiki content came over perfectly, and we were even able to add some updates. The forum was also ported (posts/topics/accounts), but we were unable to bring over account passwords. As a result, you will need to do a password reset before using the new forum. We’re sorry about the inconvenience.

We hope that this new website will help us better serve the osPID community. Please let us know if there are any broken links or other issues with the website.

Take care!

by Phang Moh at February 04, 2016 01:55 PM

February 03, 2016

Bunnie Studios

Help Make “The Essential Guide to Electronics in Shenzhen” a Reality

Readers of my blog know I’ve been going to Shenzhen for some time now. I’ve taken my past decade of experience and created a tool, in the form of a book, that can help makers, hackers, and entrepreneurs unlock the potential of the electronics markets in Shenzhen. I’m looking for your help to enable a print run of this book, and so today I’m launching a campaign to print “The Essential Guide to Electronics in Shenzhen”.

As a maker and a writer, the process of creating the book is a pleasure, but I’ve come to dread the funding process. Today is like judgment day; after spending many months writing, I get to find out if my efforts are deemed worthy of your wallet. It’s compounded by the fact that funding a book is a chicken-and-egg problem; even though the manuscript is finished, no copies exist, so I can’t send it to reviewers for validating opinions. Writing the book consumes only time; but printing even a few bound copies for review is expensive.

In this case, the minimum print run is 1,000 copies. I’m realistic about the market for this book – it’s most useful for people who have immediate plans to visit Shenzhen, and so over the next 45 days I think I’d be lucky if I got a hundred backers. However, I don’t have the cash to finance the minimum print run, so I’m hoping I can convince you to purchase a copy or two of the book in the off-chance you think you may need it someday. If I can hit the campaign’s minimum target of $10,000 (about 350 copies of the book), I’ll still be in debt, but at least I’ll have a hope of eventually recovering the printing and distribution costs.

The book itself is the guide I wish I had a decade ago; you can have a brief look inside here. It’s designed to help English speakers make better use of the market. The bulk of the book consists of dozens of point-to-translate guides relating to electronic components, tools, and purchasing. It also contains supplemental chapters to give a little background on the market, getting around, and basic survival. It’s not meant to replace a travel guide; its primary focus is on electronics and enabling the user to achieve better and more reliable results despite the language barriers.

Below is an example of a point-to-translate page:

For example, the above page focuses on packaging. Once you’ve found a good component vendor, sometimes you find your parts are coming in bulk bags, instead of tape and reel. Or maybe you just need the whole thing put in a shipping box for easy transportation. This page helps you specify these details.

I’ve put several pages of the guide plus the whole sales pitch on Crowd Supply’s site; I won’t repeat that here. Instead, over the coming month, I plan to post a couple stories about the “making of” the book.

The reality is that products cost money to make. Normally, a publisher takes the financial risk to print and market a book, but I decided to self-publish because I wanted to add a number of custom features that turn the book into a tool and an experience, rather than just a novel.

The most notable, and most expensive, feature I added is the pages of blank maps interleaved with business card and sample holders.

Note that in the pre-print prototype above, the card holder pages are all in one section, but the final version will have one card holder per map.

When comparison shopping in the market, it’s really hard to keep all the samples and vendors straight. After the sixth straight shop negotiating in Chinese over the price of switches or cables, it’s pretty common that I’ll swap a business card, or a receipt will get mangled or lost. These pages enable me to mark the location of a vendor, associate it with a business card and pricing quotation, and if the samples are small (like the LEDs in the picture above) keep the sample with the whole set. I plan on using a copy of the book for every project, so a couple years down the road if someone asks me for another production run, I can quickly look up my suppliers. Keeping the hand-written original receipts is essential, because suppliers will often honor the pricing given on the receipt, even a couple years later, if you can produce it. The book is designed to give the best experience for sourcing components in the Shenzhen electronic markets.

In order to accommodate the extra thickness of samples, receipts and business cards, the book is spiral-bound. The spiral binding is also convenient for holding a pen to take notes. Finally, the spiral binding also allows you to fold the book flat to a page of interest, allowing both the vendor and the buyer to stare at the same page without fighting to keep the book open. I added an elastic strap in the back cover that can be used as a bookmark, or to help keep the book closed if it starts to get particularly full.

I also added tabbed pages at the beginning of every major section, to help with quickly finding pages of interest. Physical print books enable a fluidity in human interaction that smartphone apps and eBooks often fail to achieve. Staring at a phone to translate breaks eye contact, and the vendor immediately loses interest; momentum escapes as you scroll, scroll, scroll to the page of interest, struggle with auto-correction on a tiny on-screen keyboard, or worse yet stare at an hourglass as pages load from the cloud. But pull out the book and start thumbing through the pages, the vendor can also see and interact with the translation guide. They become a part of the experience; it’s different, interesting, and keeps their attention. Momentum is preserved as both of you point at various terms on the page to help clarify the transaction.

Thus, I spent a fair bit of time customizing the physical design of the book to make it into a tool and an experience. I considered the human factors of the Shenzhen electronics market; this book is not just a dictionary. This sort of tweaking can only be done by working with the printer directly; we had to do a bit of creative problem solving to figure out a process that brings all these elements together and can still pump out books at a rate fast enough to keep them affordable. Of course, the cost of these extra features is reflected in the book’s $35 cover price (discounted to $30 if you back the campaign now), but I think the book’s value as a sourcing and translation tool makes up for its price, especially compared to the cost of plane tickets. Or worse yet, the cost of getting the wrong part because of a failure to communicate, or of losing track of a good vendor because a receipt got lost in a jumble of samples.

This all brings me back to the point of this post. Printing the book is going to cost money, and I don’t have the cash to print and inventory the book on my own. If you think someday you might go to Shenzhen, or maybe you just like reading what I write or how the cover looks, please consider backing the campaign. If I can hit the minimum funding target in the next 45 days, it will enable a print run of 1,000 books and help keep it in stock at Crowd Supply.

Thanks, and happy hacking!

by bunnie at February 03, 2016 04:13 PM

ZeptoBARS

Noname TL431 : weekend die-shot

Yet another noname TL431.
Die size 730x571 µm.


February 03, 2016 05:50 AM

January 31, 2016

Harald Welte

On the OpenAirInterface re-licensing

In the recent FOSDEM 2016 SDR Devroom, the Q&A session following a presentation on OpenAirInterface touched the topic of its controversial licensing. As I happen to be involved deeply with Free Software licensing and Free Software telecom topics, I thought I might have some things to say about this topic. Unfortunately the Q&A session was short, hence this blog post.

As a side note, the presentation was certainly the least technical presentation in all of the FOSDEM SDR track, and that in front of a deeply technical audience. It was also probably the only presentation at FOSDEM talking a lot about "Strategic Industry Partners".

Let me also state that I actually have respect for what OAI/OSA has been and still is doing. I just don't think it is attractive to the Free Software community - and it might actually not be Free Software at all.

OpenAirInterface / History

Within EURECOM, a group around Prof. Raymond Knopp has been working on a Free Software implementation of all layers of the LTE (4G) system known as OpenAirInterface. It includes the physical layer and goes through to the core network.

The OpenAirInterface code was for many years under GPL license (GPLv2, other parts GPLv3). Initially the SVN repositories were not public (despite the license), but after some friendly mails one (at least I) could get access.

I've read through the code at several points in the past; it often seemed to me much more like a (quick and dirty?) proof-of-concept implementation than anything more general-purpose. But then, that might have been a wrong impression on my behalf, or it might be that this was simply sufficient for the kind of research they wanted to do. After all, scientific research and FOSS often have a complicated relationship. Researchers naturally have their papers as the primary output of their work, and software implementations often are more like a necessary evil than the actual goal. But then, I digress.

Now at some point in 2014, a new organization, the OpenAirInterface Software Association (OSA), was established. The idea apparently was to get involved with the tier-1 telecom suppliers (like Alcatel, Huawei, Ericsson, ...) and work together on an implementation of Free Software for future mobile data, so-called 5G technologies.

Telecom Industry and Patents

In case you don't know, the classic telecom industry loves patents. Pretty much anything and everything is patented, and the patents are heavily enforced. And not just between Samsung and Apple, or more recently also Nokia and Samsung - but basically all the time.

One of the big reasons why even the most simple UMTS/3G capable phones are so much more expensive than GSM/2G is the extensive (and expensive) list of patents Qualcomm requires every device maker to license. In the past, this was not even a fixed per-unit royalty, but the license depended on the actual overall price of the phone itself.

So wanting to work on a Free Software implementation of future telecom standards with active support and involvement of the telecom industry obviously means contention in terms of patents.

Re-Licensing

The existing GPLv2/GPLv3 license of the OpenAirInterface code of course would have meant that contributions from the patent-holding telecom industry would have to come with appropriate royalty-free patent licenses. After all, of what use is it if the software is free in terms of copyright licensing, but you still have the patents that make it non-free?

Now the big industry of course wouldn't want to do that, so the OSA decided to re-license the code-base under a new license.

As we apparently don't yet have sufficient existing Free Software licenses, they decided to create a new license. That new license (the OSA Public License V1.0) not only does away with copyleft, but also does away with a normal patent grant.

This is very sad in several ways:

  • license proliferation is always bad. Major experts and basically all major entities in the Free Software world (FSF, FSFE, OSI, ...) are opposed to it and see it as a problem. Even companies like Intel and Google have publicly raised concerns about license proliferation.
  • abandoning copyleft. Many people particularly from a GNU/Linux background would agree that copyleft is a fair deal. It ensures that everyone modifying the software will have to share such modifications with other users in a fair way. Nobody can create proprietary derivatives.
  • taking away the patent grant. Even the non-copyleft Apache 2.0 License the OSA used as a template has a broad patent grant, even for commercial applications. The OSA Public License only has a patent grant for use in a research context.

In addition to this license change, the OSA also requires a copyright assignment from all contributors.

Consequences

What kind of effect does this have in case I want to contribute?

  • I have to sign away my copyright. The OSA can at any given point in time grant anyone whatever license they want to this code.
  • I have to agree to a permissive license without copyleft, i.e. everyone else can create proprietary derivatives of my work.
  • I do not even get a patent grant from the other contributors (like the large Telecom companies).

So basically, I have to sign away my copyright, and I get nothing in return. No copyleft that ensures other people's modifications will be available under the same license, no patent grant, and I don't even keep my own copyright to be able to veto any future license changes.

My personal opinion (and apparently those of other FOSDEM attendees) is thus that the OAI / OSA invitation to contributions from the community is not a very attractive one. It might all be well and fine for large industry and research institutes. But I don't think the Free Software community has much to gain in all of this.

Now OSA will claim that the above is not true, and that all contributors (including the Telecom vendors) have agreed to license their patents under FRAND conditions to all other contributors. It even seemed to me that the speaker at FOSDEM believed this was somehow something positive. I can only laugh at that ;)

FRAND

FRAND (Fair, Reasonable and Non-Discriminatory) is a frequently invoked buzzword for patent licensing schemes. It isn't actually defined anywhere, and is most likely just meant to sound nice to people who don't understand what it really means. Like, let's say, political decision makers.

In practice, it is a disaster for individuals and small/medium-sized companies. I can tell you first-hand from having tried to obtain patent licenses from FRAND schemes before. While they might have reasonable per-unit royalties and they might offer those royalties to everyone, they typically come with ridiculous minimum annual fees.

For example, let's say they state in their FRAND license conditions that you have to pay 1 USD per device, but a minimum of USD 100,000 per year, or a similarly large one-time fee at the time of signing the contract.

That's of course very fair to the large corporations, but it makes it impossible for a small company that sells maybe 10 to 100 devices per year, as 100,000 / 10 then equals USD 10,000 per device in royalties. Does that sound fair and non-discriminatory to you?
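
To make the effect concrete, here is a small back-of-the-envelope calculation in Python, using the numbers from the example above (my own illustration, not the terms of any real license):

per_unit_royalty = 1.0         # USD per device, as advertised
minimum_annual_fee = 100000    # USD per year
for units in (10, 100, 1000, 100000):
    paid = max(minimum_annual_fee, per_unit_royalty * units)
    print(units, "devices/year ->", paid / units, "USD per device")

The advertised 1 USD per device only materializes once you ship on the order of a hundred thousand devices per year; below that, the minimum fee dominates and the effective per-unit royalty explodes.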

Summary

OAI/OSA are trying to get a non-commercial / research-oriented foot into the design and specification process of future mobile telecom network standardization. That's a big and difficult challenge.

However, the decisions they have taken in terms of licensing show that they are primarily interested in aligning with the large corporate telecom industry, and have thus created something that isn't really Free Software (missing non-research patent grant) and might in the end only help the large telecom vendors to uni-directionally consume contributions from academic research, small/medium sized companies and individual hackers.

by Harald Welte at January 31, 2016 11:00 PM

January 26, 2016

Michele's GNSS blog

uBlox: Galileo, anti-jamming and anti-spoofing firmware

Just downloaded the firmware upgrade for flash-based M8 modules from uBlox.
Flashed it in no time.
The result of UBX-MON-VER is now:



So I checked Galileo in CFG-GNSS:



Result :)



Incidentally, there is a "spoofing" flag now as well :O



Don't dare trying this on M8T...

by noreply@blogger.com (Michele Bavaro) at January 26, 2016 10:42 PM

January 22, 2016

Bunnie Studios

Novena on the Ben Heck Show

I love seeing the hacks people do with Novena! Thanks to Ben & Felix for sharing their series of adventures! The custom case they built looks totally awesome, check it out.

by bunnie at January 22, 2016 04:37 PM

January 21, 2016

Bunnie Studios

Name that Ware January 2016

The Ware for January 2016 is shown below.

I just had to replace the batteries on this one, so while it was open I tossed it in the scanner and figured it would make a fun and easy name that ware to start off the new year.

by bunnie at January 21, 2016 03:37 PM

Winner, Name that Ware December 2015

The ware for December 2015 was a Thurlby LA160 logic analyzer. Congrats to Cody Wheeland for nailing it! Email me for your prize. Also, thanks to everyone for sharing insights as to why the PCBs developed ripples of solder underneath the soldermask. Fascinating stuff, and now I understand why in PCB processing there’s a step of stripping the tin plate before applying the soldermask.

by bunnie at January 21, 2016 03:37 PM

January 19, 2016

Free Electrons

Seminar “Porting Linux on an ARM board”, materials available

On December 10th 2015, Free Electrons engineer Alexandre Belloni gave a half-day seminar on the topic of Porting Linux on an ARM board in Toulouse, France. This seminar covers topics like porting the bootloader, understanding the concept of the Device Tree, writing Linux device drivers and more. With ~50 people from various companies attending and lots of questions from the audience, this first edition has been very successful, which shows an increasing interest in using Linux on ARM platforms in the industry.

We are now publishing the 220-slide materials from this seminar, available in PDF format. Like all our training materials, this material is published under the Creative Commons BY-SA 3.0 license, which allows everyone to re-use it for free, provided derivative works are released under the same license. We indeed re-used parts of our existing training materials quite extensively for this half-day seminar.

We plan to give this half-day seminar in other locations in France in 2016. Contact us if you are interested in organizing a similar seminar in your area (we are happy to travel!).

by Thomas Petazzoni at January 19, 2016 01:18 PM

ELCE 2015 conference videos available

As in recent years, the Linux Foundation has shot videos of most of the talks at the Embedded Linux Conference Europe 2015, held in Dublin last October.

These videos are now available on YouTube, and individual links are provided on the elinux.org wiki page that keeps track of presentation materials as well. You can also find them all through the Embedded Linux Conference Europe 2015 playlist on YouTube.

All this is of course a priceless addition to the on-line slides. We hope these talks will encourage you to participate in the next editions of the Embedded Linux Conference, in San Diego in April or in Berlin in October this year.

In particular, here are the videos of the presentations given by Free Electrons engineers.

Alexandre Belloni, Supporting multi-function devices in the Linux kernel

Kernel maintainership: an oral tradition

Tutorial: learning the basics of Buildroot

Our CTO Thomas Petazzoni also gave a keynote (Linux kernel SoC mainlining: Some success factors), which was well attended. Unfortunately, like for some of the other keynotes, no video is available.

by Michael Opdenacker at January 19, 2016 01:06 PM

January 15, 2016

Bunnie Studios

Making of the Novena Heirloom

Make is hosting a wonderfully detailed article written by Kurt Mottweiler about his experience making the Novena Heirloom laptop. Check it out!


by bunnie at January 15, 2016 05:39 PM

Free Electrons

Device Tree on ARM article in French OpenSilicium magazine

Our French readers are most likely aware of OpenSilicium, a magazine dedicated to embedded technologies, with frequent articles on platforms like the Raspberry Pi and the BeagleBone Black, and on topics like real-time, FPGA, Android and many others.

Open Silicium #17

Issue #17 of the magazine has been published recently, and features a 14-pages long article Introduction to the Device Tree on ARM, written by Free Electrons engineer Thomas Petazzoni.


Besides Thomas’ article, many other topics are covered in this issue:

  • A summary of the Embedded Linux Conference Europe 2015 in Dublin
  • Icestorm, a free development toolset for FPGA
  • Using the Armadeus APF27 board with Yocto
  • Set up an embedded Linux system on the Zynq ZedBoard
  • Debugging with OpenOCD and JTAG
  • Usage of the mbed SDK on a small microcontroller, the LPC810
  • From Javascript to VHDL, the art of writing synthesizable code using an imperative language
  • Optimization of the 3R streams decompression algorithm

by Thomas Petazzoni at January 15, 2016 09:16 AM

Free Electrons at FOSDEM and the Buildroot Developers Meeting

The FOSDEM conference will take place on January 30-31 in Brussels, Belgium. Like every year, there are lots of interesting talks for embedded developers, starting with the Embedded, Mobile and Automotive Devroom, but also in the Hardware and Graphics tracks. Some talks in the IoT and Security devrooms may also be interesting to embedded developers.

Thomas Petazzoni, embedded Linux engineer and CTO at Free Electrons, will be present during the FOSDEM conference. Thomas will also participate in the Buildroot Developers Meeting that will take place on February 1-2 in Brussels, hosted by Google.

by Thomas Petazzoni at January 15, 2016 08:52 AM

January 14, 2016

Free Electrons

Linux 4.4, Free Electrons contributions

Linux 4.4 has been released, a week later than the normal schedule in order to allow kernel developers to recover from the Christmas/New Year period. As usual, LWN has covered the 4.4 cycle merge window, in two articles: part 1 and part 2. This time around, KernelNewbies has a nice overview of the Linux 4.4 changes. With 112 patches merged, we are the 20th contributing company by number of patches according to the statistics.

Besides our contributions in terms of patches, some of our engineers have also become over time maintainers of specific areas of the Linux kernel. Recently, LWN.net conducted a study of how the patches merged in 4.4 went into the kernel, which shows the chain of maintainers who pushed the patches up to Linus Torvalds. Free Electrons engineers had the following role in this chain of maintainers:

  • As a co-maintainer of the Allwinner (sunxi) ARM support, Maxime Ripard has submitted a pull request with one patch to the clock maintainers, and pull requests with a total of 124 patches to the ARM SoC maintainers.
  • As a maintainer of the RTC subsystem, Alexandre Belloni has submitted pull requests with 30 patches directly to Linus Torvalds.
  • As a co-maintainer of the AT91 ARM support, Alexandre Belloni has submitted pull requests with 46 patches to the ARM SoC maintainers.
  • As a co-maintainer of the Marvell EBU ARM support, Gregory Clement has submitted pull requests with a total of 33 patches to the ARM SoC maintainers.

Our contributions for the 4.4 kernel were centered around the following topics:

  • Alexandre Belloni continued some general improvements to support for the AT91 ARM processors, with fixes and cleanups in the at91-reset, at91-poweroff, at91_udc, atmel-st, at91_can drivers and some clock driver improvements.
  • Alexandre Belloni also wrote a driver for the RV8803 RTC from Microcrystal.
  • Antoine Ténart added PWM support for the Marvell Berlin platform and enabled the use of cpufreq on this platform.
  • Antoine Ténart made some improvements to the pxa3xx_nand driver, still in preparation for the addition of support for the Marvell Berlin NAND controller.
  • Boris Brezillon did a number of improvements to the sunxi_nand driver, used for the NAND controller found on the Allwinner SoCs. Boris also merged a few patches doing cleanups and improvements to the MTD subsystem itself.
  • Boris Brezillon enabled the cryptographic accelerator on more Marvell EBU platforms by submitting the corresponding Device Tree descriptions, and he also fixed a few bugs found in the driver.
  • Maxime Ripard reworked the handling of per-CPU interrupts on Marvell EBU platforms, especially in the mvneta network driver. This was done in preparation for enabling RSS support in the mvneta driver.
  • Maxime Ripard added support for the Allwinner R8 and the popular C.H.I.P platform.
  • Maxime Ripard enabled audio support on a number of Allwinner platforms, by adding the necessary clock code and Device Tree descriptions, and also several fixes/improvements to the ALSA driver.

The details of our contributions for 4.4:

by Thomas Petazzoni at January 14, 2016 02:32 PM

January 13, 2016

Michele's GNSS blog

NT1065 review

So I finally got around to testing the NT1065… apologies for the lack of detail, but I have done this in my very little spare time. Also, I would like to clarify that I am in no way affiliated with NTLab.

Chip overview

A picture speaks more than a thousand words.
Figure 1: NT1065 architecture
Things worth noting above are:
  • Four independent input channels with variable RF gain, so up to 4 distinct antennas can be connected;
  • Two LOs controlled by integer synthesizers, one per pair of channels, tuned respectively for the high and low RNSS bands, but one can choose to route the upper LO to the lower pair and have 4 phase-coherent channels;
  • ADC sample rate derived from either LO through integer division;
  • 4 independent image-reject mixers, IF filters and variable-gain (with AGC) paths;
  • Four independent outputs, either as two-bit CMOS ADC outputs or as analogue differential outputs, so one could
    • connect his/her own ADC, or
    • phase-combine the IF outputs in a CRPA fashion prior to digitisation;
  • Standard SPI port control.
Another important point for a hardware designer (I used to be a little bit of that) is this:
Figure 2: NT1065 application schematic
The pin allocation shows a 1 cm² QFN88 (with a 0.4 mm pin pitch) with plenty of room between the pins and an optimal design for easy routing of the RF and IF channels. Packages like that aren’t easy to find nowadays for such complex RF ICs (everything is a BGA or WLCSP), but I love QFNs because they are easy to solder with a bit of SMD practice and can be “debugged” if the PCB layout is not perfect first time.

Evaluation kit overview

The evaluation kit presents itself like this:
Figure 3: NT1065 evaluation kit
One can see the RF inputs at the top, the external reference clock input on the left, the control interface on the right and the IF/digital part at the bottom. The large baluns (for differential-to-single-ended conversion) were left unpopulated for me as I don’t use redpitaya (yet?). The control board is the same one used for the NT1036.
I configured the evaluation kit to be powered by the control board (this was a mistake, see later) and connected the ADC outputs and clock to the Spartan6 on the SdrNav40, used here simply as a USBHS DAQ. In total, there is one clock line and 8 data lines (4 pairs of SIGN/MAGN, one per channel).
The IF filters act on the Lower Side Band (LSB) or the Upper Side Band (USB) for high and low injection mixing respectively, and can be configured for a cutoff frequency between 10 and 35 MHz. Thus, bandwidths of up to 30 MHz per signal can be accommodated and the minimum ADC sampling rate should be around 20 Msps. 20 MByte/s is not easy to handle for a USBHS controller, so I will look into other more suitable (but still cost-effective) DAQ options to evaluate the front-end. In the meantime, I could do a lot with the 32 MByte/s of the FX2LP by testing either 2 channels only with 2 bits, or all 4 channels with 1 bit and compressing nibbles into bytes (halving the requested rate).
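
As a rough sketch of that 1-bit packing idea (my own Python/NumPy illustration, not the actual FPGA or host code), packing the four sign bits of one sample instant into a nibble, and two instants into a byte, looks like this:

import numpy as np

def pack(signs):
    # signs: (N, 4) array of 0/1 sign bits, one column per channel, N even
    nibbles = (signs * [1, 2, 4, 8]).sum(axis=1).astype(np.uint8)  # one nibble per instant
    return nibbles[0::2] | (nibbles[1::2] << 4)                    # two instants per byte

def unpack(packed):
    # inverse of pack(): bytes back to an (N, 4) array of +/-1 samples
    nibbles = np.empty(2 * packed.size, dtype=np.uint8)
    nibbles[0::2] = packed & 0x0F
    nibbles[1::2] = packed >> 4
    bits = (nibbles[:, None] >> np.arange(4)) & 1
    return 2 * bits.astype(np.int8) - 1                            # map {0, 1} to {-1, +1}

signs = np.random.randint(0, 2, size=(8, 4))                       # quick self-check
assert np.array_equal(unpack(pack(signs)), 2 * signs - 1)
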
The evaluation software is a single window, very simple and intuitive to use but very effective.
Figure 4: Evaluation software
The software comes with several sample configuration files that can be very useful to quickly start evaluating the chip.

Tests

All my tests used a good 10MHz CMOS reference.

GPS L1

The first test was GPS L1 in high injection mode, setting the first LO to 1590 MHz (R1=1, N1=159), leading to an IF of -14.58 MHz, a filter bandwidth of about 28 MHz and a sampling frequency of 53 Msps (K1/2=15). I streamed one minute to disk and verified correct operation.
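
The numbers are easy to double-check; here is a quick Python sanity check of my own, assuming the obvious reading of the R, N and K dividers (LO = f_ref * N / R, IF = f_signal - LO, ADC clock = LO / K):

f_ref = 10.0e6                   # external reference, Hz
lo1 = f_ref * 159 / 1            # R1=1, N1=159 -> 1590 MHz
print((1575.42e6 - lo1) / 1e6)   # GPS L1: -14.58 MHz IF (high side injection)
print(lo1 / 30 / 1e6)            # K1 = 30 (K1/2 = 15): 53.0 Msps ADC clock
lo2 = f_ref * 119 / 1            # R2=1, N2=119 -> 1190 MHz, used for L5 further below
print((1176.45e6 - lo2) / 1e6)   # GPS L5: -13.55 MHz IF
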
Figure 5: GPS L1 PSD (left) and histogram+time series (right)
Figure 6: G30 correlation of L1 code detail (left) and all satellites (right).

GPS L1/L5

When performing this test I bumped into a hardware problem. If the control board powers the NT1065 evaluation kit with its internal 3.3V reference, the power line is gated by a small resistor, so the voltage depends on the current drawn by the chip (undesirable!). Enabling the second channel in the GUI made the chip draw more current, so the voltage on the evaluation kit dropped away from the SdrNav40 one, which was steady at 3.3V. The level mismatch made reading the digital levels unreliable and caused failures to transfer meaningful data. So I powered the evaluation kit from the SdrNav40 3.3V reference and everything was happy again.
In this configuration L1 is again at -14.58 MHz (1590 MHz, high side injection) and L5 is on the third channel (low RNSS) at -13.55 MHz (R2=1, N2=119 for 1190 MHz, high side injection). Worth noting is the relatively large spike in the spectrum at 1166 MHz; it is not an obvious harmonic, so it could be some unwanted emission from neighbouring equipment.
Figure 7: L5 PSD (left) and histogram+time series (right)
Figure 8: G30 correlation of L5 code detail (left) and all satellites (right).
Interestingly, the Matlab satellite search algorithm returns respectively for L1 and L5:
Searching GPS30 -> found: Doppler +4500.0 CodeShift:  35226 xcorr: 12502.4
Searching GPS30 -> found: Doppler +3000.0 CodeShift:  35226
The above outputs show coarse but correctly scaled Doppler [Hz] and a perfect match in code delay [samples] (just by chance spot on).
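
The scaling itself is easy to verify: carrier Doppler is proportional to carrier frequency, so the L5 estimate should be about 1176.45/1575.42 times the L1 one. In Python:

print(4500.0 * 1176.45 / 1575.42)   # ~3360 Hz, consistent with the coarse +3000 Hz bin above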

4x GPS L1

In this case I enabled all 4 channels and shared the LO amongst them all. Unfortunately I cannot show the 6dB increase in gain when steering a beam towards a satellite, as all RF inputs were connected to the same antenna and, the noise being the same, steering the phase is useless. However, it is possible to verify that the phase amongst the channels is perfectly coherent (a requirement for an easy CRPA).
The signals were conveniently brought to baseband, filtered and decimated by 5, resulting in a 10.6 MHz sampling rate. As one can see below, the power was well matched and the inter-channel carrier phase is extremely steady and constant over the 60-second capture time. In this zero-baseline case, one can easily check that such a phase difference is also the same across different satellites (as it does not depend on geometry but just on the different path lengths beyond the splitter).
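
For the curious, here is a minimal NumPy sketch of this kind of check (my own illustration, not the actual code behind the figure): bring each channel to complex baseband, low-pass and decimate, then take the phase of the cross product against a reference channel. On a zero baseline this phase should stay constant over time and be identical across satellites.

import numpy as np

fs, f_if = 53.0e6, -14.58e6      # sample rate and IF of the shared-LO L1 configuration

def to_baseband(x, dec=5):
    n = np.arange(x.size)
    bb = x * np.exp(-2j * np.pi * f_if * n / fs)                     # shift the IF to 0 Hz
    return bb[:bb.size // dec * dec].reshape(-1, dec).mean(axis=1)   # crude low-pass + decimate

def phase_vs_ref(ref_bb, ch_bb):
    return np.angle(np.vdot(ref_bb, ch_bb))                          # mean phase offset, radians

# e.g. phase_vs_ref(to_baseband(ch0_samples), to_baseband(ch1_samples)) on successive blocks
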
Figure 9: PSD of the IF obtained from the 4 channels and relative carrier phase

GPS L1 + Glonass G1 + GPS L5 + Glonass G3

Here I wanted to verify reception of Glonass G1 on the second channel (upper side band). At this point it had become a mere formality. Glonass CH0 is at +12 MHz, so the acquisition returned correctly as shown below. Of course 53 Msps for a BPSK(0.5) is a bit of an overkill :)
Figure 10: Glonass acquisition all satellites (left) and CH-5 detail (right).

GPS L1 + Beidou B1 + GPS L5 + Galileo E5b

The case of GPS and Beidou was a bit more challenging, as the separation between L1 and B1 is only 14.322 MHz, so the IFs must be around 7 MHz. I decided to set the LO to 1570 MHz (R1=1, N1=157). GPS thus went upper side band on channel 1 at a +5.42 MHz IF, and Beidou consequently went lower side band on channel 2 at -8.902 MHz. Channels 3 and 4 were enabled with LO2 set at 1190 MHz, in the middle between E5a and E5b, in order to verify AltBOC reception.
As 1570 MHz is a nasty frequency from which to generate a round sampling frequency, I decided to derive the clock from LO2 using K2/2 = 10 and therefore stream at 59.5 Msps. As one can see below, the L1 peak has now moved very close to baseband and the sampling frequency comfortably exceeds the Nyquist requirement.
Figure 11: GPS acquisition with close-in IF
Figure 12: Beidou B1 spectrum (MSS on the right) and acquisition (incidentally also showing IGSO generation 3 satellites C31 and C32).
Figure 13: E5a acquisition of E30
Figure 14: E5b acquisition of E30, showing a perfect match in code delay with E5a as one would expect.

Conclusions and work to do

I am very surprised by how little time it took me from unboxing the kit to successfully using it to acquire all the GNSS signals I could think of, and to test all configurations. Of course I had prior experience with the NT1036, but this time I had the perception of a solid, feature-rich, plug-and-play IC.
On my todo list there is an extension of this post with a home-made measurement of channel isolation... and the way I plan to do it should be interesting to the readers :)

by noreply@blogger.com (Michele Bavaro) at January 13, 2016 09:49 PM

January 11, 2016

Altus Metrum

Altos1.6.2

AltOS 1.6.2 — TeleMega v2.0 support, bug fixes and documentation updates

Bdale and I are pleased to announce the release of AltOS version 1.6.2.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STMF042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including support for our new TeleMega v2.0 board, a small selection of bug fixes and a major update of the documentation.

AltOS Firmware — TeleMega v2.0 added

The updated six-channel flight computer, TeleMega v2.0, has a few changes from the v1.0 design:

  • CC1200 radio chip instead of the CC1120. Better receive performance for packet mode, same transmit performance.

  • Serial external connector replaced with four PWM channels for external servos.

  • Companion pins rewired to match EasyMega functionality.

None of these change the basic functionality of the device, but they do change the firmware a bit so there's a new package.

AltOS Bug Fixes

We also worked around a ground station limitation in the firmware:

  • Slow down telemetry packets so receivers can keep up. With TeleMega v2 offering a fast CPU and faster radio chip, it was overrunning our receivers so a small gap was introduced between packets.

AltosUI and TeleGPS applications

A few minor new features are in this release:

  • Post-flight TeleMega and EasyMega orientation computations were off by a factor of two

  • Downloading eeprom data from flight hardware would bail if there was an error in a data record. Now it keeps going.

Documentation

I spent a good number of hours completely reformatting and restructuring the Altus Metrum documentation.

  • I've changed the source format from raw docbook to asciidoc, which has made it much easier to edit and to use docbook features like links.

  • The css moves the table of contents out to a sidebar so you can navigate the html format easily.

  • There's a separate EasyMini manual now, constructed by taking sections from the larger manual.

by keithp's rocket blog at January 11, 2016 05:03 AM

January 03, 2016

Harald Welte

Conferences I look forward to in 2016

While I was still active in the Linux kernel development / network security field, I was regularly attending 10 to 15 conferences per year.

Doing so is relatively easy if you earn a decent freelancer salary and are working all by yourself. Running a company funded out of your own pockets, with many issues requiring (or at least benefiting from) personal physical presence in the office, changes that.

Nevertheless, after some years of being less of a conference speaker, I'm happy to see that the tide is somewhat changing in 2016.

After my talk at 32C3, I'm looking forward to attending (and sometimes speaking at) events in the first quarter of 2016. Not sure if I can keep up that pace in the following quarters...

FOSDEM

FOSDEM (http://fosdem.org/2016) is a classic, and I don't even remember for how many years I've been attending it. I would say it is fair to state that it is the single largest event specifically by and for community-oriented free software developers. It feels like home every time.

netdevconf 1.1

netdevconf (http://www.netdevconf.org/1.1/) is actually something I'm really looking forward to. A relatively new grass-roots conference. Deeply technical, and only oriented towards Linux networking hackers. The part of the kernel community that I've known and loved during my old netfilter days.

I'm very happy to attend the event, both for its technical content and of course to meet old friends like Jozsef, Pablo, etc. I also read that Kunihiro Ishiguro will be there. I always adored his initial work on Zebra (whose vty code we coincidentally use in almost all osmocom projects as part of libosmovty).

It's great to again see an event that is not driven by commercial / professional conference organizers, high registration fees, and corporate interests. Reminds me of the good old days when Linux was still the underdog and not mainstream... Think of Linuxtag in its early days?

Linaro Connect

I'll be attending Linaro Connect for the first time in many years. It's a pity that one cannot run various open source telecom protocol stack / network element projects and a company and at the same time still be involved deeply in Embedded Linux kernel/system development. So I'll use the opportunity to get some view into that field again - and of course meet old friends.

OsmoDevCon

OsmoDevCon is our annual invitation-only developer meeting of the Osmocom developers. It's very low-profile, basically a no-frills family meeting of the Osmocom community. But it's really great to meet with all of the team and hear about their respective experiences / special interest topics.

TelcoSecDay

This (https://www.troopers.de/events/troopers16/580_telcosecday_2016_invitation_only/) is another invitation-only event, organized by the makers of the TROOPERS conference. The idea is to make folks from the classic Telco industry meet with people in IT Security who are looking at Telco related topics. I've been there some years ago, and will finally be able to make it again this year to talk about how the current introduction of 3G/3.5G into the Osmocom network side elements can be used for security research.

by Harald Welte at January 03, 2016 11:00 PM

January 01, 2016

Michele's GNSS blog

Happy begin of 2016

2015 just passed. I don't write much here anymore as time has become a very precious resource and my job imposes tight limitations on what one can or cannot write on the web.
The yearly update will quickly cover the constellation status, some info on low cost RTK developments and some more SDR thoughts (although the most significant article in that respect will come soon in another post).

Constellation updates


As retrieved from Tomoji Takasu's popular diary, 2015 has seen the following launches:

Date/Time (UTC)     Satellite             Orbit   Launcher        Launch Site               Notes
2015/03/25 18:36    GPS Block IIF-9       MEO     Delta-IV        Cape Canaveral, US        G26
2015/03/27 21:46    Galileo FOC-3, 4      MEO     Soyuz ST-B      Kourou, French Guiana     E26, E22
2015/03/28 11:49    IRNSS-1D              IGSO    PSLV            Satish Dhawan SC, India   111.75E
2015/03/31 13:52    BeiDou-3 I1           IGSO    Long March 3C   Xichang, China            C15
2015/07/15 15:36    GPS Block IIF-10      MEO     Atlas-V         Cape Canaveral, US        G08
2015/07/25 12:28    BeiDou-3 M1-S, M2-S   MEO     Long March 3B   Xichang, China            ?
2015/09/10 02:08    Galileo FOC-5, 6      MEO     Soyuz ST-B      Kourou, French Guiana     E24, E30
2015/09/29 23:23    BeiDou-3 I2-S         IGSO    Long March 3B   Xichang, China            ?
2015/10/30 16:13    GPS Block IIF-11      MEO     Atlas-V         Cape Canaveral, US        G10
2015/11/10 21:34    GSAT-15 (GAGAN)       GEO     Ariane 5        Kourou, French Guiana     93.5E
2015/12/17 11:51    Galileo FOC-8, 9      MEO     Soyuz ST-B      Kourou, French Guiana     E??, E??


GPS 

GPS replaced three IIA birds with brand new IIF ones, as one can see in Figure 1. The number of GPS satellites transmitting L5 has now risen to 11 (as one can also verify with UNAVCO). The number of GPS satellites with L2C is instead 18 (quite close to a nominal constellation!). The question is now how GPS will proceed in 2016 and beyond, having seen the delays that affect OCX and in general the bad comments (see e.g. 1 and 2) on the progress of modernisation of GPS.
Figure 1: One year of GPS observations, obtained using a bespoke tool from the freely available data courtesy of the IGS network.
Glonass

Stable situation here, as seen in Figure 2, with the only exception of PRN 17 going offline in mid-October (perhaps soon to be replaced, according to the table of upcoming launches).
Figure 2: One year of Glonass observations
Galileo

The situation has been very "dynamic" for Galileo but is indeed very promising as seen in Figure 3. The latest launch went well and we can hope for several signals in space in 2016: hopefully the year that Galileo will make its appearance in most consumer devices. Incidentally, there are as of today 8 satellites broadcasting E5a.
Figure 3: One year of Galileo observations, courtesy of the MGEX project.
Beidou 

Also for Beidou the situation is rapidly evolving, as can be seen in Figure 4. My colleague James and I did a detailed study on the new generation satellites and published part of it on GPSWorld. Indeed the 3rd generation test birds host a very versatile payload that allows them to broadcast modern navigation signals on three frequencies. Incidentally, C34 and C33 (the two MEO space vehicles) also broadcast a QPSK signal on E5a.
Figure 4: One year of Beidou observations.

Low cost RTK

An awful lot of progress here, with NVS, Skytraq, Geostar Navigation and uBlox releasing multi-constellation single frequency products for RTK.

NVS released two products with an onboard GPS+Glonass (upgradeable to Galileo) RTK engine: NV08C-RTK (for standard base-rover configuration) and NV08C-RTK-A (with added dual antenna heading determination for precision AG). Rumors say that they both run a highly reworked version of RTKLIB on an LPC32xx microcontroller (ARM926EJ-S processor with VFP unit). The price is not public, but again rumors suggest it is a few hundred EUR apiece (in small quantities) for the single receiver version. I got my hands on a couple of boards and built a simple adapter board to be able to use them with a standard laptop and a wireless module fitting the Xbee socket (including this one).



Skytraq has built on its Navspark initiative and came out with two groundbreaking products, the S2525F8-RTK and S2525F8-BD-RTK. The - I shall say - provocative price of 50 and 150 USD respectively sets a new threshold very hard to beat. Skytraq has also done extensive analysis on the performance of GPS only versus GPS+Beidou single frequency RTK, e.g. here and here. In Asia the dual constellation (2x CDMA) single frequency (1540x and 1526x f0) RTK shows incredibly promising results, mainly due to the impressive number of birds in view. I got my hands on a couple of plug&play evaluation kits and already verified the sub-minute convergence time to fix in zero baseline and good visibility conditions.



Geostar Navigation has also recently released the GeoS-3MR which is practically identical in terms of capability to the GeoS-3 and GeoS-3M, but has a factory setting such that the most recent firmware provides carrier phase for both GPS and Glonass. Although Glonass phase is not calibrated, last month statements from Tomoji suggest that this feature could be incorporated in v2.4.3 anyway.
A few years ago I had designed and produced some carrier boards for the GeoS-3M, so I could just place an order for a few raw-capable chips (at 25 USD each) and test them out. The software provided by the manufacturer (Demo3 and toRNX) makes it possible to extract Rinex observations from the binary logs. At the time I had also developed some parser code for RTKLIB, but I now found out that it has a small issue... I don't feel like reinstalling C++ Builder just to fix it, so anyone please feel free to take that code and push it to v2.4.3.


 
uBlox released the M8T module with raw data support for two simultaneous constellations... a very interesting chip, but I have the feeling that some big change is going to happen there since the company is focussing much more on comms than nav lately.

ComNav offers the K500 OEM board also for less than 300 EUR in small quantities.

In view of all the above, one could expect that initiatives like Reach® and Piksi® will surely have to reconsider their approach. In particular, things based on Edison® are facing the competition of ARM-based modules which are perfectly capable of RTK and are accessible at a much lower price (e.g. see Raspberry Pi Zero and C.H.I.P.). SwiftNav has recently released an update, but unless they go multi-frequency rapidly the competition will give them very hard times.

Finally, low cost dual frequency cards such as Precis-L1L2 have started to appear. Apparently based on a Chinese Unicorecomm OEM board, it offers multi-constellation multi-frequency RTK at 800 USD.

SDR

Over the holidays I assembled the test-bench for the NT1065, the latest multi-constellation front-end from NTLAB. The setup again is very clean and builds on lessons learnt with the NT1036: I will present the first results soon, in the next post.


Since the chip has the native capability of streaming about 60 MBytes/sec (4 channels of ~15 MHz IF output at 2 bits per sample), a USB2.0 transceiver is sub-optimal, as it is limited to about 40 MBytes/sec.
I started investigating the FT601 USB3.0 transceiver from FTDI and the KSZ9031RNX GigETH transceiver from Micrel, as seen in the beautiful development from Peter Monta. Also, the availability of the FX3 Explorer Kit is tempting as an easy mid-step solution. There are many SDR boards, but I would just need a cheap programmable FPGA+GigETH/USBSS and I cannot find it... Parallella seems the best candidate with its Porcupine to use and some software to develop of course (I am surprised nobody has published a GPIO-GigEth streamer software with Parallella yet). Ettus and Avnet are much ahead with powerful SDR platforms (e.g. the B210 and the picoZed SDR SOM) but there is what feels like a steep learning curve to use them. Perhaps it is time again to go design something?
In the meantime, I am watching the pcDuino3 Nano Lite and the Odroid XU4 as cheap NAS solutions to efficiently store long snapshots of IF data.

by noreply@blogger.com (Michele Bavaro) at January 01, 2016 10:11 PM

December 30, 2015

Harald Welte

32C3 is over, GSM and GPRS was running fine, osmo-iuh progress

The 32C3 GSM Network

32C3 was great from the Osmocom perspective: We could again run our own cellular network at the event in order to perform load testing with real users. We had 7 BTSs running, each with a single TRX. What was new compared to previous years:

  • OsmoPCU is significantly more robust and stable due to the efforts of Jacob Erlbeck at sysmocom. This means that GPRS is now actually still usable in severe overload situations, like 1000 subscribers sharing only very few kilobits. Of course it will be slow, but at least data still passes through as much as that's possible.
  • We were using half-rate traffic channels from day 2 onwards, in order to enhance capacity. Phones supporting AMR-HR would use that, but then there are lots of old phones that only do classic HR (v1). OsmoNITB with the internal MNCC handler has supported TCH/H with HR and AMR for at least five years, but the particular combination of OsmoBTS + OsmoNITB + lcr (all master branches) had not yet been deployed at previous CCC event networks.

Being forced to provide classic HR codec actually revealed several bugs in the existing code:

  • OsmoBTS (at least with the sysmoBTS hardware) is using bit ordering that is not compliant to what the spec says on how GSM-HR frames should be put into RTP frames. We didn't realize this so far, as handing frames from one sysmoBTS to another sysmoBTS of course works, as both use the same (wrong) bit ordering.
  • The ETSI reference implementation of the HR codec has lots of global/static variables, and thus doesn't really support running multiple transcoders in parallel. This is however what lcr was trying (and needing) to do, and it of course failed as state from one transcoder instance was leaking into another. The problem is simple, but the solution not so simple. If you want to avoid re-structuring the entire code in very intrusive ways or running one thread per transcoder instance, then the only solution was to basically memcpy() the entire data section of the transcoding library every time you switch the state from one transcoder instance to the other. It's surprisingly difficult to learn the start + size of that data section at runtime in a portable way, though.

Thanks to our resident voice codec expert Sylvain for debugging and fixing the above two problems.

Thanks also to Daniel and Ulli for taking care of the actual logistics of bringing + installing (+ later unmounting) all associated equipment.

Thanks furthermore to Kevin who has been patiently handling the 'Level 2 Support' cases of people with various problems ending up in the GSM room.

It's great that there is a team taking care of those real-world test networks. We learn a lot more about our software under heavy load situations this way.

osmo-iuh progress + talk

I've been focussing basically full day (and night) over the week ahead of Christmas and during Christmas to bring the osmo-iuh code into a state where we could do an end-to-end demo with a regular phone + hNodeB + osmo-hnbgw + osmo-sgsn + openggsn. Unfortunately I only got it up to the point where we do the PDP CONTEXT ACTIVATION on the signalling plane, with no actual user data going back and forth. And then, for strange reasons, I couldn't even demo that at the end of the talk. Well, in either case, the code has made much progress.

The video of the talk can be found at https://media.ccc.de/v/32c3-7412-running_your_own_3g_3_5g_network#video

meeting friends

The annual CCC congress is always an event where you meet old friends and colleagues. It was great talking to Stefan, Dimitri, Kevin, Nico, Sylvain, Jochen, Sec, Schneider, bunnie and many other hackers. After the event is over, I wish I could continue working together with all those folks the rest of the year, too :/

Some people have been missed dearly. Absence from the CCC congress is not acceptable. You know who you are, if you're reading this ;)

by Harald Welte at December 30, 2015 11:00 PM

Video Circuits

RTL TV 40 ANS - Le Hit- Parade, Featuring EMS Spectron

A little snippet here of classic TV graphics from an anniversary on RTL, which includes some video mixer feedback effects and some very familiar EMS Spectron/Spectre shapes, blink and you will miss them!



"La télévison Luxembourgeoise a célébré ses quarantes ans en 1995.Voici un extrait de la soirée qui c'est déroulée à la Villa Louvigny."




by Chris (noreply@blogger.com) at December 30, 2015 12:10 AM

December 21, 2015

Bunnie Studios

Name that Ware December 2015

The Ware for December 2015 is shown below.

This ware got me at “6502”. Thanks to DavidG Cape Town for contributing this specimen!

One question for the readers (separate from naming the ware!): it's been something I've wondered about for decades. On the back side of this board, one can see ripples on the fatter traces. My original assumption was that this is due to a problem with hot air leveling after the application of a solder finish to the bare copper board, before the soldermask is applied. However, the top side is almost entirely smooth, so clearly the process can supply a flatter finish.

So here’s my quandary: are the ripples intentional (for example, an attempt to increase current capacity by selectively thickening fat traces with a solder coating), or accidental (perhaps microscopic flaws in the soldermask allowing molten metal to seep under the soldermask during wave soldering)?

Been wondering about this since I was like 15 years old, but never got around to asking anyone…

Happy holidays to everyone! I’ll be at 32C3 (thankfully I have a ticket), haunting the fail0verflow table. Come enjoy a beer with me, I’m not (officially) giving any talks so I can actually sit back and enjoy the congress this year.

by bunnie at December 21, 2015 08:16 PM

Winner Name that Ware November 2015

The Ware for November 2015 was an RS-485 interface picomotor driver of unknown make and model, but probably similar to one of these. It's designed to drive piezo (slip stick) motors; the circuits on board generate 150V waveforms at low current to drive a linear actuator with very fine positional accuracy.

This one was apparently a stumper, as several guessed it had something to do with motor control or positioning, but nobody put that together with the high voltage rated parts (yet with no heatsinking, so driving low currents) on the board to figure out that it's meant for piezo or possibly some other electrostatic (e.g. MEMS) actuators. Better luck next month!

by bunnie at December 21, 2015 08:16 PM

Elphel

X3D assemblies from any CAD

Converting mechanical assemblies to X3D models from STEP (ISO 10303) files

Like all manufacturing companies we use a mechanical CAD program to design our products. We would love to use Free Software programs for that, but so far even FreeCAD has a warning on their download page "FreeCAD is under heavy development and might not be ready for production use". We have to use proprietary tools; our choice was a program that natively runs on the GNU/Linux we use on our computers. This program generates STEP files that we can send to virtually any machine shop (locally or overseas) and expect to receive the manufactured parts that match our design. For the last 6 years we have kept the CAD models for all the camera parts on Elphel Wiki, hoping they might be needed not only by the machine shops we order parts from, but also by our users to incorporate (or modify) our products in their systems.

All mechanical CAD programs can export STEP, so we can use this format for assemblies

The STEP file export is quite adequate for production, but it would be convenient for our users (including ourselves) to be able to easily navigate through the complex assemblies. Theoretically STEP can handle assemblies too, but I got the impression that the CAD program owners are not that interested in interoperability – they want everybody to use their program, and the interoperability scope is limited to a simplified scheme: CAD (theirs) -> CAM (any), and the assembly structure is often lost when generating output files. When we tried to export the Eyesis4π camera as a STEP file it grew to more than 0.5GB in size and when imported (even by the same program) it resulted in over 1800 solids without any hierarchy or even the part names. Additionally the colors were lost when the STEP file was imported back, and that is understandable – CAD programs need to be able to produce STEP files (otherwise they would be completely useless), but importing requirements are more relaxed. Having no control over the proprietary program output we had to find a way to use the CAM files (in STEP format) in a different way than the CAD providers intended and recreate the assembly structure ourselves.

FreeCAD as the environment for model conversion

FreeCAD seemed to us the best choice for the next step regardless of its "not ready for production" status, as it has the great advantage of being FLOSS, with excellent support for Python access to its functionality (through macros and a nice Python console). First I looked for a possibility to export data as X3D and was impressed that the FreeCAD macro that does that – export_x3d.py – has less than 100 lines of code. It did not export colored faces of electronic components on the PCB, but that was something we could definitely fix ourselves.

Having working color output was the first step to a more ambitious project – feed the program with a library of STEP files of components and a flat STEP assembly file. The program should recognize each of the objects in the assembly by comparing it with the known parts, replace them with references to the library parts and provide translation and rotation. There are multiple ways to deal with this task and I will describe what we did later in the post; in short – it just worked. We fed the program with a library of 800+ part files that we had (some custom, some just standard fasteners from McMaster) and the assembly file, and it recognized almost all of the objects and correctly placed them, so Oleg Dzhimiev was able to start working on the viewer to navigate the models using the x3dom technology while I continued working on the converter.

Links to the converted models

Here is a link to the Elphel Wiki page Elphel camera assemblies. This page includes multiple designs – the new NC393 camera models (for which we have not yet received all the mechanical parts) as well as our current products for which we already had the needed CAD files.

We have not tried to convert design data exported by other mechanical CAD software, and it would be interesting to know if this program can help users of other CAD systems. We tried to make it agnostic to the source of the STEP files, but it does require the possibility to export files with a specified color of the faces (AP214 has this possibility while AP203 does not). Color information is needed anyway as a proxy for materials/finish to distinguish between different parts that have exactly the same geometry; we also use it to hint orientation of the parts in the assembly.

There are multiple ways how the program can be improved, but at least for our project it is already usable. And we hope it is not just for us.

Technical details

As soon as we verified that FreeCAD can import our STEP files and that it is not that difficult to generate the X3D models, we started the freecad_x3d project at Github. The x3d_step_assy.py macro runs in FreeCAD and generates X3D files from the STEP input; the rest of the repository is the viewer for the produced models.

Indexing the STEP part files

The first thing the program does is scan and index all the STEP models of the parts, saving the information that is needed for matching them to the assembly objects. STEP opening in FreeCAD is a very slow process (especially in the GUI mode that is required to have access to the object color information), so this step is needed to significantly speed up subsequent processing of assembly files. The part-invariant information such as the center of gravity (center of volume, to be precise) location, volume, surface area and gyration radii is provided by FreeCAD. If the part has differently colored faces, the centers for each color are recorded too. Additionally a list of up to 18 vertex coordinates is calculated and added – these vertices are tested to be inside (or near to) the objects in the assembly. Currently these vertices are selected as having maximal and minimal values for each of the 3 coordinates as well as their sums and differences.

Normally each part model consists of just one solid object, but in practice it is not always the case. The CAD program we use generates an extra "tube" object for each thread, and sometimes we do it intentionally, like making a two-solid photographic UV protection filter as a frame and a glass. This allows us to selectively change the solid/wireframe state when working in the CAD program. The current implementation saves information about each solid in a part and places the largest (for now, by volume) solid first (at index 0); the matching uses only the first solid, and that leads to false positives in the reporting of objects that do not have any matches to parts. "False" – because these unmatched objects will still appear in the X3D model, as they are included in the individual part models. Removing such false positive objects from the report is definitely possible, but it was not a big hassle to manually inspect them in the FreeCAD 3D-view.

All this information is recorded in Python pickle format, one file for each STEP file. When the program needs to process an assembly, it first verifies that each STEP part file has a corresponding pickle file and (re)calculates the ones that are either missing or outdated (older than the STEP model).
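A minimal sketch of that caching logic in plain Python, assuming a hypothetical index_step_part() helper that opens the STEP file in FreeCAD and extracts the invariants (the file naming is illustrative, not the macro's actual layout):

import os
import pickle

def load_or_rebuild_index(step_path, index_step_part):
    # Return cached invariants for a STEP part, rebuilding the pickle
    # file if it is missing or older than the STEP model
    pickle_path = os.path.splitext(step_path)[0] + '.pickle'
    if (os.path.exists(pickle_path) and
            os.path.getmtime(pickle_path) >= os.path.getmtime(step_path)):
        with open(pickle_path, 'rb') as f:
            return pickle.load(f)           # cache is still valid
    info = index_step_part(step_path)       # slow: opens the STEP file in FreeCAD
    with open(pickle_path, 'wb') as f:
        pickle.dump(info, f)                # refresh the cache
    return info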

Generation of the X3D files for each part model

The next step after indexing the STEP models of the parts is to generate the individual parts in X3D format. The program takes the color information that exists after import in GUI mode for each object face and uses it in the generation of the X3D XML data. It wraps each object with an X3D "Group" node to combine the multiple possible objects in a part and to provide bounding box information, and then adds the outermost "Transform" node with zero translation and rotation – it can be used by the viewer program to move and rotate the object. Currently the viewer reads the group bounding box center and moves the top object in the opposite direction for convenient rotation. The imported STEP files may have large offsets of the models from the (0,0,0) point; if this is not corrected, the viewer may try to rotate the object around a point that is far off-screen.
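As a schematic illustration of that wrapping (plain ElementTree, not the macro's actual output code; the helper and its arguments are made up for the example):

import xml.etree.ElementTree as ET

def wrap_part(shape_nodes, bbox_center):
    # Outermost Transform with zero translation/rotation, to be driven by the viewer
    transform = ET.Element('Transform', {'translation': '0 0 0', 'rotation': '0 0 1 0'})
    # Group combines the possible multiple solids of one part and carries the bounding box center
    group = ET.SubElement(transform, 'Group',
                          {'bboxCenter': ' '.join(str(c) for c in bbox_center)})
    for node in shape_nodes:            # already-built <Shape> elements
        group.append(node)
    return transform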

Similarly to the generation of the pickle files, the program only generates part X3D models if they do not exist or are older than the input STEP files. We noticed that at this stage FreeCAD often segfaults (regardless of the version) and it seems to be related to the GUI. Luckily you only have to load this many files once, and if FreeCAD crashes you may just restart it and the macro will continue generating the new files.

Selection of the parts candidates for the assembly objects

Opening a complex assembly as a STEP file in FreeCAD can take a while (one of our models took 40 minutes to open), so please be patient. The part matching takes about half that time, so the program offers two options – use the currently active document in FreeCAD, or start from the file path and open it.

When all the assembly data is available, the program indexes each object, extracting parameters similar to those of the parts – volume, area, inertial properties, and centers of each color (if present). Then it uses this data to create a list of part candidates for each assembly object, requiring that the orientation-invariant parameters of each object exported as a part of the assembly match (to the configurable precision) those of the same part exported individually. If colors are available, the total area of each color is compared too, but a match is allowed if only the shape is the same, as the CAD program may allow changing the object color in the assembly, making it different from that of the library part. If several parts match the assembly object, then the better color match disqualifies other shape-only candidates, so it is possible to color-code the same-shape parts.
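As an illustration of this candidate selection (not the macro's actual code), a sketch that compares a few orientation-invariant parameters with a relative tolerance; the dictionary keys and the 1% tolerance are assumptions:

def close(a, b, rel_tol=0.01):
    # Relative comparison of two scalar invariants
    return abs(a - b) <= rel_tol * max(abs(a), abs(b), 1e-12)

def find_candidates(assembly_obj, part_index, rel_tol=0.01):
    # Return names of library parts whose orientation-invariant
    # parameters match the given assembly object
    candidates = []
    for name, part in part_index.items():
        if (close(assembly_obj['volume'], part['volume'], rel_tol) and
                close(assembly_obj['area'], part['area'], rel_tol) and
                all(close(a, p, rel_tol) for a, p in
                    zip(assembly_obj['gyration_radii'], part['gyration_radii']))):
            candidates.append(name)
    return candidates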

Matching of the assembly object to the part orientation

The next step of the assembly-to-parts decomposition is to determine the part position and orientation that match the assembly objects. In most cases there will be no more than one candidate for each object, but if there are several the program will try them all and use the first match. It is very easy to find the translation of the part – just use the vector between the already known centers of volume – but it is more tricky to find the correct orientation. There are multiple ways to match orientations, and the program can definitely be improved. We chose a rather simple approach that requires modification of some parts, but that is rather easy as the part models are created by us. The number of parts that required modification is rather small, this modification has to be done once per part (not for each assembly), and the modification does not invalidate the model for CAM usage.

This approach uses the offsets of the "centers of gravity" of the faces of each color (even a single-colored object may have the center of all faces offset from the center of volume) and then the principal axes of gyration that are provided by FreeCAD. Color offsets are used first, then supplemented by the gyration axes; each step verifies that the vector is non-zero and that the next one is not collinear with the first. Only two orthogonal vectors are needed; the third one needed for the rotation matrix is calculated as a cross product of the first two. Using the gyration axes, even if all 3 have different gyration radii and so are reliably calculated, has an ambiguity: they do not provide the sign, only the line of direction. The same asymmetrical object can be oriented in 4 different ways (alternating the signs of two of the 3 axes) and the program tries each of them. Initially I tried to compare the volume of the boolean intersection of the two objects, which should be the same as the volume of a single object if they match, but for some of our STEP models FreeCAD refused to calculate the intersection, so I used the isinside() function instead. It calculates whether a given point is inside the object to a certain precision, so it can be used to verify that all of the vertices saved for the part object, with the transformation matrix applied, end up "inside" the assembly object (actually on the border). Unfortunately even that had an exception – for one of the objects one vertex was returning "False" with any tolerance, even larger than the object size. In that rare case the program tries to move the test point around by a precision-long vector, and that modification worked: FreeCAD returned "True" for the isinside() call.
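The orientation search described above essentially builds an orthonormal basis from two non-collinear direction vectors (a color-center offset and/or a gyration axis) for both the library part and the assembly object, and combines the two bases into a rotation. A small numpy sketch of that idea – not the macro's actual code:

import numpy as np

def basis_from_directions(v1, v2):
    # Build a right-handed orthonormal basis from two non-collinear vectors
    x = v1 / np.linalg.norm(v1)
    y = v2 - np.dot(v2, x) * x          # remove the component along x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                  # third axis as a cross product
    return np.column_stack((x, y, z))

def rotation_between(part_dirs, assy_dirs):
    # Rotation matrix that maps the part orientation onto the assembly object
    return basis_from_directions(*assy_dirs) @ basis_from_directions(*part_dirs).T

Because a gyration axis defines only a line and not a sign, such a rotation would be tried for each sign combination and kept only if the transformed test vertices pass the isinside() check, as described above.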

When the color hints are required in the part models

Using just the gyration principal axes fails when the object has some symmetry (point or axis). Consider a regular socket head screw. Unless it is a really short one it will have one small and two equal large gyration radii, and the axis for the small gyration radius can be reliably found (it is just the regular axis of the screw), but the two perpendicular ones are arbitrary and may be different for the part and the assembly object. That leads to the hex socket having an incorrect orientation, but usually this hex hole orientation is unimportant. So here we cheated slightly – the test vertices selected for verification with isinside() are some of the outermost ones of the solid (we selected vertices that have maximal/minimal values of each of the coordinates and of the sums/differences of their pairs and all three) – the hex hole does not have any of them. Most of the fasteners we use are such socket head ones; this approach would not work for hex bolts and nuts – they need to have one of the hex faces colored.

And there are other objects that require some color hints in the part model, like a square plate having no holes or only symmetrical ones, or a turned (round) part with symmetrical holes in it – two of the gyration radii are the same and the corresponding axes cannot be unambiguously determined. You may color one of the side faces of the square, or color the inside of a hole, to break the symmetry. If the part does not have individually selectable faces in the CAD program you may create a small colored cylinder or box, align it with one of the flat faces, boolean-cut it from the object, and then boolean-add it back. The resulting object has the same shape for CAM, but it will have a colored square or circle on one of the faces – sufficient for an unambiguous definition of the orientation.

Converting multi-level assemblies

The program can convert multi-level assemblies that contain sub-assemblies, and the MC393F21 design includes such subassembly models. For this model I created a proxy single-solid object in each subassembly (there are 3 used – 0393-07-02, 0393-07-03 and 0393-07-01, which in turn includes three of 0393-07-03), and when exporting the top model to STEP the actual content of the subassembly models was blanked and the proxy objects were visible, so they were exported. The resulting STEP file was placed in a separate directory from the part files, and an optional suffix ('-ASSY' by default) was added to the file name before the extension. Each subassembly was exported to STEP twice – once with only the proxy object visible (that file is used to find matches in the higher level of the assembly), with the result saved in the same parts STEP directory, and a second time as an assembly (to a different directory and with the optional suffix) with the proxy object blanked. Conversion of these complex assemblies should be performed bottom-up – first the lower level sub-assemblies, then the ones that use them. The output X3D directory will have both partName.x3d (converted from a proxy object) and partName-ASSY.x3d that has the actual model of the subassembly. The partName-ASSY.x3d files are not in the index and they do not have source STEP files in the parts directory, so they are not used when matching objects in the assembly. When all the possible objects are matched and the program generates the model X3D file, it replaces inline references to partName.x3d with partName-ASSY.x3d if such files exist in the X3D directory.

by andrey at December 21, 2015 10:09 AM

ZeptoBARS

Dallas Semiconductor DS1000Z : weekend die-shot

Dallas Semiconductor DS1000Z - 5 tap delay line.
Die size 2074x1768 µm.


December 21, 2015 09:48 AM

Altus Metrum

TeleLaunchTwo

TeleLaunchTwo — A Smaller Wireless Launch Controller

I've built a wireless launch control system for NAR and OROC. Those are both complex systems with a single controller capable of running hundreds of pads. And, it's also complicated to build, with each board hand-made by elves in our Portland facility (aka, my office).

A bunch of people have asked for something simpler, but using the same AES-secured two-way wireless communications link, so I decided to just build something and see if we couldn't eventually come up with something useful. I think if there's enough interest, I can get some boards built for reasonable money.

Here's a picture of the system; you can see the LCO end in a box behind the pad end sitting on the bench.

Radio Link

Each end has a 35mW 70cm digital transceiver (so, they run in the 440MHz amateur band). These run at 19200 baud with fancy forward error correction and AES security to keep the link from accidentally (or maliciously) firing a rocket at the wrong time. Using a bi-directional link, we also get igniter continuity and remote arming information at the LCO end.

The LCO Box

In the LCO box, there's a lipo battery to run the device, so it can be completely stand-alone. It has three switches and a button -- an arming switch for each of two channels, a power switch and a firing button. The lipo can be charged by opening up the box and plugging it into a USB port.

The Pad Box

The pad box will have some cable glands for the battery and each firing circuit. On top, it will have two switches, a power switch and an arming switch. The board has two high-power FETs to drive the igniters. That should be more reliable than using a relay, while also allowing the board to tolerate a wider range of voltages -- the pad box can run on anything from 12V to 24V.

The Box

Unlike the OROC and NAR systems, these boards are both designed to fit inside a specific box, the Hammond 1554E, and use the mounting standoffs provided. This box is rated at NEMA 4X, which means it's fairly weather proof. Of course, I have to cut holes in the box, but I found some NEMA 4X switches, will use cable glands for the pad box wiring and can use silicone around the BNC connector. The result should be pretty robust. I also found a pretty solid-seeming BNC connector, which hooks around the edge of the board and also clips on to the board.

Safety Features

There's an arming switch on both ends of the link, and you can't fire a rocket without having both ends armed. That provides an extra measure of safety while working near the pad. The pad switch is a physical interlock between the power supply and the igniters, so even if the software is hacked or broken, disarming the box means the igniters won't fire.

The LCO box beeps constantly when either arming switch is selected, giving you feedback that the system is ready to fire. And you can see on any LED whether the pad box is also armed.

by keithp's rocket blog at December 21, 2015 03:51 AM

December 05, 2015

Harald Welte

Volunteer for Openmoko.org USB Product ID maintenance

Back when Openmoko took the fall, we donated the Openmoko, Inc. USB Vendor ID to the community and started the registry of free Product ID allocations at http://wiki.openmoko.org/wiki/USB_Product_IDs

Given my many other involvements and constant overload, I've been doing a poor job at maintaining it, i.e. handling incoming requests.

So I'm looking for somebody who can reliably take care of it, including

  • reviewing if the project fulfills the criteria (hardware or software already released under FOSS license)
  • entering new allocations to the wiki
  • informing applicants of their allocation

The amount of work is actually not that much (like one mail per week), but it needs somebody to reliably respond to the requests in a shorter time frame than I can currently do.

Please let me know if you'd like to volunteer.

by Harald Welte at December 05, 2015 11:00 PM

Anyone interested in supporting SMPP interworking at 32C3?

Sylvain brought this up yesterday: Wouldn't it be nice to have some degree of SMS interfacing from OpenBSC/OsmoNITB to the real world at 32C3? It is something that we've never tried so far, and thus definitely worthy of testing.

Of course, full interworking is not possible without assigning public MSISDNs to all internal subscribers / 'extensions', as we call them.

But what would most certainly work is to have at least outbound SMS working by means of an external SMPP interface.

The OsmoNITB-internal SMSC speaks SMPP already (in the SMSC role), so we would need to implement some small amount of glue logic that behaves as an ESME (External Short Message Entity) towards both OsmoNITB as well as some public SMS operator/reseller that speaks SMPP again.

Now of course, sending SMS to public operators doesn't come for free. So in case anyone reading this has access to SMPP at public operators, resellers, SMS hubs, it would be interesting to see if there is a chance for some funding/sponsoring of that experiment.

Feel free to contact me if you see a way to make this happen.

by Harald Welte at December 05, 2015 11:00 PM

December 04, 2015

Harald Welte

python-libsmpp works great with OsmoNITB

Since 2012 we have had support for SMPP in OsmoNITB (the network-in-the-box version of OpenBSC). So far I've only used it from C and Erlang code.

Yesterday I gave python-smpplib from https://github.com/podshumok/python-smpplib a try and it worked like a charm. Of course one has to get the details right (like numbering plan indication).

In case anyone is interested in interfacing OsmoNITB SMPP from python, I've put a working example to send SMS at http://cgit.osmocom.org/mncc-python/tree/smpp_test.py
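For reference, a minimal sketch along the lines of the python-smpplib examples; the host/port, credentials and numbers below are placeholders, and the TON/NPI values are exactly the kind of detail that has to match the OsmoNITB configuration:

import smpplib.client
import smpplib.consts
import smpplib.gsm

client = smpplib.client.Client('127.0.0.1', 2775)   # OsmoNITB SMPP address (placeholder)
client.connect()
client.bind_transceiver(system_id='test', password='test')

# Split the text into parts and get matching data_coding / esm_class values
parts, encoding_flag, msg_type_flag = smpplib.gsm.make_parts(u'Hello from SMPP')
for part in parts:
    client.send_message(
        source_addr_ton=smpplib.consts.SMPP_TON_INTL,
        source_addr_npi=smpplib.consts.SMPP_NPI_ISDN,   # numbering plan indication
        source_addr='362',
        dest_addr_ton=smpplib.consts.SMPP_TON_INTL,
        dest_addr_npi=smpplib.consts.SMPP_NPI_ISDN,
        destination_addr='161',
        short_message=part,
        data_coding=encoding_flag,
        esm_class=msg_type_flag,
    )

client.unbind()
client.disconnect()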

by Harald Welte at December 04, 2015 11:00 PM

December 01, 2015

Harald Welte

Python tool to talk to OsmoNITB MNCC interface

I've been working on a small python tool that can be used to attach to the MNCC interface of OsmoNITB. It implements the 04.08 CC state machine with our MNCC primitives, including support for RTP bridge mode of the voice streams.

The immediate first use case for this was to be able to generate MT calls to a set of known MSISDNs and load all 14 TCH/H channels of a single-TRX BTS. It will connect the MT calls in pairs, so you end up with 7 MS-to-MS calls.

The first working version of the tool is available from

The code is pretty hacky in some places. That's partially due to the fact that I'm much more familiar with the C, Perl and Erlang world than with python. Still I thought it's a good idea to do it in python to enable more people to use/edit/contribute to it.

I'm happy for review / cleanup suggestion by people with more Python-foo than I have.

Architecturally, I decided to do things a bit erlang-like, where we have finite state machines in an actor model, and message passing between the actors. This is what happens with the GsmCallFsm()'s, which are created by the GsmCallConnector() representing both legs of a call and the MnccActor() that wraps the MNCC socket towards OsmoNITB.

The actual encoding/decoding of MNCC messages is auto-generated from the mncc header file #defines, enums and c-structures by means of ctypes code generation.
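To illustrate the idea (this is not the generated code itself), mapping a C structure from a header onto Python with ctypes looks roughly like this; the structure shown is a made-up example, not the actual MNCC message layout:

import ctypes

# Hypothetical C struct:  struct example_number { int type; int plan; char number[33]; };
class ExampleNumber(ctypes.Structure):
    _fields_ = [
        ('type',   ctypes.c_int),
        ('plan',   ctypes.c_int),
        ('number', ctypes.c_char * 33),
    ]

def decode(buf):
    # Decode raw bytes read from the MNCC socket into the structure
    return ExampleNumber.from_buffer_copy(buf[:ctypes.sizeof(ExampleNumber)])

def encode(msg):
    # Encode the structure back into bytes for writing to the socket
    return ctypes.string_at(ctypes.addressof(msg), ctypes.sizeof(msg))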

mncc_test.py currently drops you into a python shell where you can e.g. start more / new calls by calling functions like connect_call("7839", "3802") from that shell. Exiting the shell by quit() or Ctrl+C will terminate all call FSMs and exit.

by Harald Welte at December 01, 2015 11:00 PM

November 30, 2015

Free Electrons

UN climate conference: switching to “green” electricity

Wind turbines in Denmark

The United Nations 2015 Climate Change Conference is an opportunity for everyone to think about contributing to the transition to renewable and sustainable energy sources.

One way to do that is to buy electricity that is produced from renewable resources (solar, wind, hydro, biomass…). With the worldwide opening of the energy markets, this should now be possible in most parts of the world.

So, with an electricity consumption between 4,000 and 5,000 kWh per year, we have decided to make the switch for our main office in Orange, France. But how to choose a good supplier?

Greenpeace turned out to be a very good source of information about this topic, comparing the offerings from various suppliers, and finding out which ones really make serious investments in renewable energy sources.

Here are the countries for which we have found Greenpeace rankings:
  • Australia
  • France

If you find a similar report for your country, please let us know, and we will add it to this list.

Back to our case, we chose Enercoop, a French cooperative company only producing renewable energy. This supplier has by far the best ranking from Greenpeace, and stands out from more traditional suppliers which too often are just trading green certificates, charging consumers a premium rate without investing by themselves in green energy production.

The process to switch to a green electricity supplier was very straightforward. All we needed was an electricity bill and 15 minutes of time, whether you are an individual or represent a company. From now on, Enercoop will guarantee that for every kWh we consume from the power grid, they will inject the same amount of energy into the grid from renewable sources. There is no risk of seeing more power outages than before, as the national company operating and maintaining the grid stays the same.

It’s true our electricity is going to cost about 20% more than nuclear electricity, but at least what we spend is going to support local investments in renewable energy sources that don’t degrade the fragile environment that keeps us alive.

Your comments and own tips are welcome!

by Michael Opdenacker at November 30, 2015 10:37 AM

November 28, 2015

Bunnie Studios

Products over Patents

NPR’s Audrey Quinn from Planet Money explores IP in the age of rapid manufacturing by investigating the two-wheel self balancing scooter. When patent paperwork takes more time and resources than product production, more agile systems of idea sharing evolve to keep up with the new pace of innovation.

If the embedded audio player above isn’t working, try this link. Seems like the embed isn’t working outside the US…

by bunnie at November 28, 2015 11:20 PM

MLTalk with Joi Ito, Nadya Peek and me

I gave an MLTalk at the MIT Media Lab this week, where I disclose a bit more about the genesis of the Orchard platform used to build, among other things, the Burning Man sexually generated light pattern badge I wrote about a couple months back.

The short provocation is followed up by a conversation with Joi Ito, the Director of the Media Lab, and Nadya Peek, a renowned expert in digital fabrication from the CBA (and incidentally, the namesake of the Peek Array in the Novena laptop) about supply chains, digital fabrication, trustability, and things we’d like to see in the future of low volume manufacturing.

I figured I’d throw a link here on the blog to break the monotony of name that wares. Sorry for the lack of new posts, but I’ve been working on a couple of books and magazine articles in the past months (some of which have made it to print: IEEE Spectrum, Wired) which have consumed most of my capacity for creative writing.

by bunnie at November 28, 2015 12:50 AM

Name that Ware November 2015

This month’s ware is shown below:

And below are views of the TO-220 devices which are folded over in the top-down photo:

We continue this month with the campaign to get Nava Whiteford permission to buy a SEM. Thanks again to Nava for providing another interesting ware!

by bunnie at November 28, 2015 12:22 AM

Winner, Name that Ware October 2015

The ware for October 2015 was a LeCroy LT342L. Nava notes that it was actually manufactured by Iwatsu, but the ASICs on the inside all say LeCroy. Congrats to Carl Smith for nailing it, email me for your prize and happy Thanksgiving!

by bunnie at November 28, 2015 12:22 AM

November 19, 2015

Geoffrey L. Barrows - DIY Drones

360 degree stereo vision and obstacle avoidance on a Crazyflie nano quadrotor

(More info and full post here)

I've been experimenting with putting 360 degree vision, including stereo vision, onto a Crazyflie nano quadrotor to assist with flight in near-Earth and indoor environments. Four stereo boards, each holding two image sensor chips and lenses, together see in all directions except up and down. We developed the image sensor chips and lenses in-house for this work, since there is nothing available elsewhere that is suitable for platforms of this size. The control processor (on the square PCB in the middle) uses optical flow for position control and stereo vision for obstacle avoidance. The system uses a "supervised autonomy" control scheme in which the operator gives high level commands via control sticks (e.g. "move this general direction") and the control system implements the maneuver while avoiding nearby obstacles. All sensing and processing is performed on board. The Crazyflie itself was unmodified other than a few lines of code in its firmware to get the target Euler angles and throttle from the vision system.

Below is a video from a few flights in an indoor space. This is best viewed on a laptop or desktop computer to see the annotations in the video. The performance is not perfect, but much better than the pure "hover in place" systems I had flown in the past since obstacles are now avoided. I would not have been able to fly in the last room without the vision system to assist me! There are still obvious shortcomings- for example the stereo vision currently does not respond to blank walls- but we'll address this soon...

by Geoffrey L. Barrows at November 19, 2015 11:28 PM

November 15, 2015

Harald Welte

GSM test network at 32C3, after all

Contrary to my blog post yesterday, it looks like we will have a private GSM network at the CCC congress again, after all.

It appears that Vodafone Germany (who was awarded the former DECT guard band in the 2015 spectrum auctions) is not yet using it in December, and they agreed that we can use it at the 32C3.

With this approval from Vodafone Germany we can now go to the regulator (BNetzA) and obtain the usual test license. Given that we used to get the license in the past, and that Vodafone has agreed, this should be a mere formality.

For the German language readers who appreciate the language of the administration, it will be a Frequenzzuteilung für Versuchszwecke im nichtöffentlichen mobilen Landfunk.

So thanks to Vodafone Germany, who enabled us at least this time to run a network again. By end of 2016 you can be sure they will have put their new spectrum to use, so I'm not that optimistic that this would be possible again.

by Harald Welte at November 15, 2015 11:00 PM

November 14, 2015

Harald Welte

No GSM test network at 32C3

I currently don't assume that there will be a GSM network at the 32C3.

Ever since OpenBSC was created in 2008, the annual CCC congress was a great opportunity to test OpenBSC and related software with thousands of willing participants. In order to do so, we obtained a test licence from the German regulatory authority. This was never any problem, as there was a chunk of spectrum in the 1800 MHz GSM band that was not allocated to any commercial operator, the so-called DECT guard band. It's called that way as it was kept free in order to ensure there is no interference between 1800 MHz GSM and the neighboring DECT cordless telephones.

Over the decades, it was determined on an EU level that this guard band might not be necessary, or at least not if certain considerations are taken for BTSs deployed in that band.

When the German regulatory authority re-auctioned the GSM spectrum earlier this year, they decided to also auction the frequencies of the former DECT guard band. The DECT guard band was awarded to Vodafone.

This is a pity, as this means that people involved with cellular research or development of cellular technology now have it significantly harder to actually test their systems.

In some other EU member states it is easier, like in the Netherlands or the UK, where the DECT guard band was not treated like any other chunk of the GSM bands, but put under special rules. Not so in Germany.

To make a long story short: Without the explicit permission of any of the commercial mobile operators, it is not possible to run a test/experimental network like we used to run at the annual CCC congress.

Given that

  • the event is held in the city center (where frequencies are typically used and re-used quite densely), and
  • an operator has nothing to gain from permitting us to test our open source GSM/GPRS implementations,

I think there is little chance that this will become a reality.

If anyone has really good contacts to the radio network planning team of a German mobile operator and wants to prove me wrong: Feel free to contact me by e-mail.

Thanks to everyone involved with the GSM team at the CCC events, particularly Holger Freyther, Daniel Willmann, Stefan Schmidt, Jan Luebbe, Peter Stuge, Sylvain Munaut, Kevin Redon, Andreas Eversberg, Ulli (and everyone else whom I may have forgotten, my apologies). It's been a pleasure!

Thanks also to our friends at the POC (Phone Operation Center) who have provided interfacing to the DECT, ISDN, analog and VoIP network at the events. Thanks to roh for helping with our special patch requests. Thanks also to those entities and people who borrowed equipment (like BTSs) in the pre-sysmocom years.

So long, and thanks for all the fish!

by Harald Welte at November 14, 2015 11:00 PM

November 12, 2015

Elphel

NC393 progress update: 14MPix Sensor Front End is up and running

10398 Sensor Front End with 14MPix MT9F002

10398 Sensor Front End with 14MPix MT9F002

Sensors (ON Semiconductor MT9F002) and blank PCBs arrived in time, so I was able to hand-assemble two 10398 boards and start testing them. I had some minor problems getting data output from the first board, but it turned out to be just my bad soldering of the sensor; the second board worked immediately. To my surprise I did not have any problems with the HiSPi decoder that I simulated using the sensor model I wrote myself from the documentation, so the color bar test pattern appeared almost immediately, followed by the real acquired images. I kept most of the sensor settings unmodified from the default values, just selected the correct PLL multiplier, output signal levels (1.8V HiVCM – compatible with the FPGA) and packetized format; the only other registers I had to adjust manually were exposure and color analog gains.

As was reasonable to expect, the sensitivity of the 14MPix sensor is lower than that of the 5MPix MT9P006 – our initial estimate is that it is 4 times lower, but this needs more careful measurements to find out the exposure required for pixel saturation with the same illumination. We set the analog channel gains for both sensors slightly higher than the minimum needed for saturation, but such rough measurements could easily miss a factor of 1.5. The MT9F002 offers more controls over the signal chain gains, but any (even analog) gain in the chain that boosts the signal above the minimum needed for saturation proportionally reduces the used “well capacity”, while I expect the Full Well Capacity (FWC) is already not very high for a sensor with 1.4μm × 1.4μm pixels. And a decrease in the number of electrons stored in a pixel accordingly increases the relative shot noise that reveals itself in the highlight areas. We will need to accurately measure the FWC of the MT9F002 and do a better sensitivity comparison, including that of the binned mode, but I expect to find out that the 5MPix sensors are not obsolete yet and for some applications may still have advantages over the newer sensors.

Image acquired with 5 MPix MT9P006 sensor, 1/2000 s

Image acquired with 5 MPix MT9P006 sensor, 1/2000 s

Image acquired with 14MPix MT9F002 sensor, 1/500 s

Image acquired with 14MPix MT9F002 sensor, 1/500 s

Both sensors used identical f=4.5mm F3.0 lenses; the lens of the 5MPix sensor is precisely adjusted during calibration, while the lens of the 14MPix sensor is just attached and focused by hand using the lens thread, with no tilt correction performed. Both images are saved at 100% JPEG quality (lossless compression) to eliminate compression artifacts, and both used the in-camera simple 3×3 demosaic algorithm. The 14 MPix image has a visible checkerboard pattern caused by the difference of the 2 green values (green in the red row, and green in the blue row). I’ll check that it is not caused by some FPGA code bug I might have introduced (by saving a raw image and doing the de-Bayer on a host computer), but it may also be caused by pixel cross-talk in the sensor. In any case it is possible to compensate for it, or at least significantly reduce it, in the output data.
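As a rough illustration of such a compensation on the raw Bayer data (a sketch assuming a GRBG-style layout where the two green channels sit on alternating row parities; the actual pixel order depends on the sensor readout settings):

import numpy as np

def equalize_greens(raw):
    # Scale one green channel of the Bayer mosaic so that both greens
    # have the same mean level, removing a uniform checkerboard pattern
    raw = raw.astype(np.float32)
    g_in_red_rows  = raw[0::2, 0::2]     # green pixels in the "red" rows (layout assumption)
    g_in_blue_rows = raw[1::2, 1::2]     # green pixels in the "blue" rows
    ratio = g_in_red_rows.mean() / g_in_blue_rows.mean()
    raw[1::2, 1::2] *= ratio             # bring the second green to the same level
    return raw

A global ratio like this only removes a uniform gain difference; cross-talk that varies across the image would need a per-region or per-pixel correction.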

The MT9F002 transmits data over 5 differential 100Ω pairs: 1 clock pair and 4 data lanes. For the initial tests I used our regular 70mm flex cable made for the parallel interface sensors, and just soldered five 100Ω resistors to the contacts at the camera side end. It did work and I did not even have to do any timing adjustments of the differential lanes. We’ll do such adjustments in the future to get to the centers of the data windows – both the sensor and the FPGA code have provisions for that. The physical 100Ω load resistors were needed as it turned out that the Xilinx Zynq has on-chip differential termination only for the 2.5V (or higher) supply voltages on the regular (not “high performance”) I/Os, and this application uses 1.8V interface power – I missed this part of the documentation and assumed that all the differential inputs have the possibility to turn on differential termination. The 660 Mbps/lane data rate is not too high and I expect that it will be possible to use short cables with no load resistors at all; adding such resistors to the 10393 board is not an option as it has to work with both serial and parallel sensor interfaces. Simultaneously we designed and placed an order for dedicated flex cables 150mm long; if that works out we’ll try longer (450mm) controlled impedance cables.

by andrey at November 12, 2015 08:43 PM

November 10, 2015

ZeptoBARS

Infineon BFR740 - 42GHz BJT : weekend die-shot

Infineon BFR740L3RH - bipolar SiGe RF transistor with transition frequency of 42Ghz in a very small leadless package (TSLP-3-9 - 0.6×1×0.31mm).
Die size 305x265 µm.



After metal etch we can see that it's not that simple:


Main active area (scale 1px = 57nm):



November 10, 2015 05:18 AM

November 07, 2015

Harald Welte

Progress on the Linux kernel GTP code

It is always sad if you start to develop some project and then never get around to finishing it, as there are too many things to take care of in parallel. But then, days only have 24 hours...

Back in 2012 I started to write some generic Linux kernel GTP tunneling code. GTP is the GPRS Tunneling Protocol, a protocol between core network elements in GPRS networks, later extended to be used in UMTS and even LTE networks.

GTP is split in a control plane for management and the user plane carrying the actual user IP traffic of a mobile subscriber. So if you're reading this blog via a cellular internet connection, your data is carried in GTP-U within the cellular core network.

To me as a former Linux kernel networking developer, the user plane of GTP (GTP-U) had always belonged into kernel space. It is a tunneling protocol not too different from many other tunneling protocols that already exist (GRE, IPIP, L2TP, PPP, ...) and for the user plane, all it does is basically add a header in one direction and remove the header in the other direction – a job much better done in the kernel than by pushing every packet of user data through userspace, particularly in networks with many subscribers and/or high bandwidth use.

Also, unlike many other telecom / cellular protocols, GTP is an IP-only protocol with no E1, Frame Relay or ATM legacy. It also has nothing to do with SS7, nor does it use ASN.1 syntax and/or some exotic encoding rules. In summary, it is nothing like any other GSM/3GPP protocol, and looks much more like what you're used to from the IETF/Internet world.

Unfortunately I didn't get very far with my code back in 2012, but luckily Pablo Neira (one of my colleagues from netfilter/iptables days) picked it up and brought it along. However, for some time it was stalled, until recently it was thankfully picked up by Andreas Schultz; it now receives some attention and discussion, with the clear intention to finish + submit it for mainline inclusion.

The code is now kept in a git repository at http://git.osmocom.org/osmo-gtp-kernel/

Thanks to Pablo and Andreas for picking this up, let's hope this is the last coding sprint before it goes mainline and gets actually used in production.

by Harald Welte at November 07, 2015 11:00 PM

Osmocom Berlin meetings

Back in 2012, I started the idea of having a regular, bi-weekly meeting of people interested in mobile communications technology, not only strictly related to the Osmocom projects and software. This was initially called the Osmocom User Group Berlin. The meetings were held twice per month in the rooms of the Chaos Computer Club Berlin.

There are plenty of people that were or still are involved with Osmocom one way or another in Berlin. Think of zecke, alphaone, 2b-as, kevin, nion, max, prom, dexter, myself - just to name a few.

Over the years, I got "too busy" and was no longer able to attend regularly. Some people kept it alive (thanks to dexter!), but eventually the meetings were discontinued in 2013.

In October 2015 I started a revival of the meetings; two have been held already, and the third is coming up next week on November 11.

I'm happy that I had the idea of re-starting the meeting. It's good to meet old friends and new people alike. Both times there actually were some new faces around, most of which even had a classic professional telecom background.

In order to emphasize that the focus is strictly not on Osmocom alone (and particularly not only on its users), I decided to rename the event to the Osmocom Meeting Berlin.

If you're in Berlin and are interested in mobile communications technology on the protocol and radio side of things, feel free to join us next Wednesday.

by Harald Welte at November 07, 2015 11:00 PM

November 04, 2015

Elphel

NC393 progress update: one gigapixel per second (12x faster than NC353)

All the PCBs for the new camera: 10393, 10389 and 10385 are modified to rev “A”, we have already received the new boards from the factory and are now waiting for the first production batch to be built. The PCB changes are minor, just moving connectors away from the board edge to simplify the mechanical design and improve thermal contact of the heat sink plate to the camera body. Additionally the 10389A got an M.2 connector instead of the mSATA one to accommodate modern SSDs.

While waiting for the production we designed a new sensor board (10398) that has exactly the same dimensions and the same image sensor format as the current 10338E, so it is compatible with the hardware for the calibrated sensor front ends we use in photogrammetric cameras. The difference is that the MT9F002 it uses is a 14 MPix device and has a high-speed serial interface instead of the legacy parallel one. We expect to get the new boards and the sensors next week and will immediately start working with this new hardware.

In preparation for the faster sensors I started to work on the FPGA code to make it ready for the new devices. We planned to use modern sensors with serial interfaces from the very beginning of the new camera design, so the hardware accommodates up to 8 differential data lanes plus a clock pair in addition to the I²C and several control signals. One obviously required part is the support for the Aptina HiSPi (High Speed Serial Pixel) interface, which in the case of MT9F002 uses 4 differential data lanes, each running at 660 Mbps – in 12-bit mode that corresponds to 220 MPix/s. Until we get the actual sensors I could only simulate receiving the HiSPi data, using a sensor model we wrote ourselves following the interface documentation. I will still need to make sure I understood the documentation correctly and that the sensor produces output similar to what we modeled.

The sensor interface is not the only piece of the code that needed changes: I also had to significantly increase the bandwidth of the FPGA signal processing and to modify the I²C sequencer to support 2-byte register addresses.

Data that the FPGA receives from the sensor passes through several clock domains until it is stored in the system memory as a sequence of compressed JPEG/JP4 frames:

  • Sensor data in each channel enters FPGA at a pixel clock rate, and subsequently passes through vignetting correction/scaling module, gamma conversion module and histogram calculation modules. This chain output is buffered before crossing to the memory clock domain.
  • Multichannel DDR3 memory controller records sensor data in line-scan order and later retrieves it in overlapping (for JPEG) or non-overlapping (for JP4) square tiles.
  • Data tiles retrieved from the external DDR3 memory are sent to the compressor clock domain to be processed with the JPEG algorithm. In color JPEG mode the compressor bandwidth has to be 1.5 times higher than the pixel rate, as for 4:2:0 encoding each 16×16 pixel macroblock generates 6 of the 8×8 image blocks – 4 for Y (intensity) and 2 for the color components (see the arithmetic sketch right after this list). In JP4 mode, when the de-mosaic algorithm runs on the host computer, the compressor clock rate equals the pixel rate.
  • The last clock domain is the 150MHz one used by the AXI interface that operates in 64-bit parallel mode and transfers the compressed data to the system memory.
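
As a quick sanity check of the 1.5x figure mentioned in the compressor item above (my own arithmetic for illustration, not Elphel FPGA or driver code): a 16×16 macroblock is 256 pixels, while its six 8×8 blocks are 384 samples, hence 384/256 = 1.5 samples per input pixel.

#include <stdio.h>

int main(void)
{
    const double pix_rate = 220.0;                  /* MT9F002 peak, MPix/s */
    double jpeg_rate = pix_rate * (6 * 64) / 256.0; /* 4:2:0: 6 blocks per 16x16 macroblock */
    double jp4_rate  = pix_rate;                    /* JP4: 1 sample per pixel */

    printf("color JPEG: %.0f Msamples/s, JP4: %.0f Msamples/s\n",
           jpeg_rate, jp4_rate);
    return 0;
}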

Two of these domains use a double clock rate for some of the processing stages – histogram calculation in the pixel clock domain and the Huffman encoder/bit stuffer in the compressor. In the previous NC353 camera the pixel clock rate was 96MHz (192 MHz for the double rate) and the compressor rate was 80MHz (160MHz for the double rate). The sensor/compressor clock rate difference reflects the fact that the sensor data output is not uniform (it pauses during inactive lines), while the compressor can process the frame at a steady rate.

The MT9F002 image sensor has an output pixel rate of 220MPix/s with an average (over the full frame) rate of 198MPix/s. Using double rate clocks (440MHz for the sensor channel and 400MHz for the compressor) would be rather difficult on Zynq, so I first needed to eliminate such clocks in the design. It was possible to implement and test this modification with the existing sensor, and now it is done – the four camera compressors each run at 250 MHz (even on “-1″, or “slow” speed grade silicon), making a total of 1GPix/sec. This does not require 4 separate sensors running simultaneously – a single high speed imager can provide data for all 4 compressors, each processing every 4th frame, as each image is processed independently.

At this time the memory controller will be a bottleneck when running all four MT9F002 sensors simultaneously, as it currently provides only 1600MB/s bandwidth, which may be just marginally sufficient for four MT9F002 sensor channels and 4 compressor channels each requiring 200MB/s (the bandwidth overhead is just a few percent). I am sure it will be possible to optimize the memory controller code to run at a higher rate to match the compressors. We already identified which parts of the memory controller need to be modified to support a 1.5x clock increase, to a total of 2400MB/s. And as the production NC393 camera will have a higher speed grade SoC, there will be an extra 20% performance increase for the same code. That will provide bandwidth sufficient not just to run 4 sensors at full speed and compress the output data, but to do some other image manipulation at the same time.
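
The memory bandwidth numbers above work out the same way (again, just illustrative arithmetic, not camera code):

/* Four sensor channels writing plus four compressor channels reading,
 * each at roughly 200 MB/s, against the current and planned controller
 * bandwidth. */
enum {
    SENSOR_WRITE_MBS = 4 * 200,                           /*  800 MB/s */
    COMP_READ_MBS    = 4 * 200,                           /*  800 MB/s */
    NEEDED_MBS       = SENSOR_WRITE_MBS + COMP_READ_MBS,  /* 1600 MB/s - marginal */
    CURRENT_MBS      = 1600,
    PLANNED_MBS      = CURRENT_MBS * 3 / 2,               /* 2400 MB/s after 1.5x clock */
};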

Compared to the previous Elphel NC353 camera, the new NC393 prototype is already tested to have 12x higher compressor bandwidth (4 channels instead of one, and 250MPix/s instead of 80MPix/s); we plan to have results from the actual sensor with the full data processing chain soon.

by andrey at November 04, 2015 06:41 AM

November 03, 2015

Free Electrons

Linux 4.3 released, Free Electrons contributions inside

The 4.3 kernel has been released just a few days ago. For details about the big new features in this release, we as usual recommend reading the LWN.net articles covering the merge window: part 1, part 2 and part 3.

According to the KPS statistics, there were 12128 commits in this release, and with 110 patches, Free Electrons is the 20th contributing company. As usual, we made some contributions to this release, though a somewhat smaller number than for previous releases.

Our main contributions this time around:

  • On the support for Atmel ARM SoCs
    • Alexandre Belloni contributed a fairly significant number of cleanups: description of the slow clock in the Device Tree, removal of left-overs from platform-data usage in device drivers (no longer needed now that all Atmel ARM platforms use the Device Tree).
    • Boris Brezillon contributed numerous improvements to the atmel-hlcdc, which is the DRM/KMS driver for the modern Atmel ARM SoCs. He added support for several SoCs to the driver (SAMA5D2, SAMA5D4, SAM9x5 and SAM9n12), added PRIME support, and support for the RGB565 and RGB444 output configurations.
    • Maxime Ripard improved the dmaengine drivers for Atmel ARM SoCs (at_hdmac and at_xdmac) to add memset and scatter-gather memset capabilities.
  • On the support for Allwinner ARM SoCs
    • Maxime Ripard converted the SID driver to the newly introduced nvmem framework. Maxime also did some minor pin-muxing and clock related updates.
    • Boris Brezillon fixed some issues in the NAND controller driver.
  • On the support for Marvell EBU ARM SoCs
    • Thomas Petazzoni added the initial support for suspend to RAM on Armada 38x platforms. The support is not fully enabled yet due to remaining stability issues, but most of the code is in place. Thomas also did some minor updates/fixes to the XOR and crypto drivers.
    • Grégory Clement added the initial support for standby, a mode that allows forcefully putting the CPUs in deep-idle mode. For now, it is not different from what cpuidle provides, but in the future, we will progressively enable this mode to shut down PHY and SERDES lanes to save more power.
  • On the RTC subsystem, Alexandre Belloni did numerous fixes and cleanups to the rx8025 driver, and also a few to the at91sam9 and at91rm9200 drivers.
  • On the common clock framework, Boris Brezillon contributed a change to the ->determine_rate() operation to fix overflow issues.
  • On the PWM subsystem, Boris Brezillon contributed a number of small improvements/cleanups to the subsystem and some drivers: addition of a pwm_is_enabled() helper, migrate drivers to use the existing helper functions when possible, etc.

The detailed list of our contributions is:

by Thomas Petazzoni at November 03, 2015 03:11 PM

November 02, 2015

Harald Welte

Germany's excessive additional requirements for VAT-free intra-EU shipments

Background

At my company sysmocom we are operating a small web-shop providing small tools and accessories for people interested in mobile research. This includes programmable SIM cards, SIM card protocol tracers, adapter cables, duplexers for cellular systems, GPS disciplined clock units, and other things we consider useful to people in and around the various Osmocom projects.

We of course ship domestic, inside the EU and world-wide. And that's where the trouble starts, at least since 2014.

What are VAT-free intra-EU shipments?

As many readers of this blog (at least the European ones) know, inside the EU there is a system by which intra-EU sales between businesses in EU member countries are performed without charging VAT.

This is the result of different countries having different amounts of VAT, and the fact that a business can always deduct the VAT it spends on its purchases from the VAT it has to charge on its sales. In order to avoid having to file VAT return statements in each of the countries of your suppliers, the suppliers simply ship their goods without charging VAT in the first place.

In order to have checks and balances, both the supplier and the recipient have to file declarations to their tax authorities, indicating the sales volume and the EU VAT ID of the respective business partners.

So far so good. This concept was reasonably simple to implement and it makes the life easier for all involved businesses, so everyone participates in this scheme.

Of course there always have been some obstacles, particularly here in Germany. For example, you are legally required to confirm the EU-VAT-ID of the buyer before issuing a VAT-free invoice. This confirmation request can be done online.

However, the German tax authorities invented something unbelievable: a Web-API for confirmation of EU-VAT-IDs that has opening hours. Despite this having rightfully been at the center of ridicule by the German internet community for many years, it still remains in place. So there are certain times of the day where you cannot verify EU-VAT-IDs, and thus cannot sell products VAT-free ;)

But even that, one has gotten used to living with.

Gelangensbescheinigung

Now in recent years (since January 1st, 2014), the German authorities came up with the concept of the Gelangensbescheinigung. To the German reader, this newly invented word already sounds ugly enough. A literal translation is difficult, as it sounds really clumsy; think of something like a reaching-its-destination-certificate.

So now it is no longer sufficient to simply verify the EU-VAT-ID of the buyer, issue the invoice and ship the goods, but you also have to produce such a Gelangensbescheinigung for each and every VAT-free intra-EU shipment. This document needs to include:

  • the name and address of the recipient
  • the quantity and designation of the goods sold
  • the place and month when the goods were received
  • the date of when the document was signed
  • the signature of the recipient (not required in case of an e-mail where the e-mail headers show that the message was transmitted from a server under control of the recipient)

How can you produce such a statement? Well, in the ideal / legal / formal case, you provide a form to your buyer, which he then signs, certifying that he has received the goods in the destination country.

First of all, I find it offensive that I have to ask my customers to make such declarations in the first place. And then even if I accept this and go ahead with it, it is my legal responsibility to ensure that he actually fills this in.

What if the customer doesn't want to fill it in or forgets about it?

Then I as the seller am liable to pay 19% VAT on the purchase he made, despite me never having charged those 19%.

So not only do I have to generate such forms and send them with my goods, but I also need a business process of checking for their return, reminding the customers that their form has not yet been returned, and in the end they can simply not return it and I lose money. Great.

Track+Trace / Courier Services

Now there are some alternate ways in which a Gelangensbescheinigung can be generated. For example by the track+trace protocol of the delivery company. However, the requirements on this track+trace protocol are so high that, at least when I checked in late 2013, the track and trace protocol of UPS did not fulfill the requirements. For example, a track+trace protocol usually doesn't show the quantity and designation of goods. Why would it? UPS just moves a package from A to B, and there is no customs involved that would require knowing what's in the package.

Postal Packages

Now let's say you'd like to send your goods by postal service. For low-priced non-urgent goods, that's actually what you generally want to do, as everything else is simply way too expensive compared to the value of the goods.

However, this is only permitted if the postal service you use provides you with a receipt of having accepted your package, containing the following mandatory information:

  • name and address of the entity issuing the receipt
  • name and address of the sender
  • name and address of the recipient
  • quantity and type of goods
  • date of having received the goods

Now I don't know how this works in other countries, but in Germany you will not be able to get such a receipt from the post office.

In fact I inquired several times with the legal department of Deutsche Post, up to the point of sending a registered letter (by Deutsche Post) to Deutsche Post. They have never responded to any of those letters!

So we have the German tax authorities claiming yes, of course you can still do intra-EU shipments to other countries by postal services, you just need to provide a receipt, but then at the same time they ask for a receipt indicating details that no postal receipt would ever show.

Particularly a postal receipt would never confirm what kind of goods you are sending. How would the postal service know? You hand them a package, and they transfer it. It is - rightfully - none of their business what its content may be. So how can you ask them to confirm that certain goods were received for transport ?!?

Summary

So in summary:

Since January 1st, 2014, we now have German tax regulations in force that make VAT-free intra-EU shipments extremely difficult, if not impossible:

  • The type of receipt they require from postal services is not provided by Deutsche Post, thereby making it impossible to use Deutsche Post for VAT free intra-EU shipments
  • The type of track+trace protocol issued by UPS does not fulfill the requirements, making it impossible to use them for VAT-free intra-EU shipments
  • The only other option is to get an actual receipt from the customer. If that customer doesn't want to provide this, the German seller is liable to pay the 19% German VAT, despite never having charged that to his customer

Conclusion

To me, the conclusion of all of this can only be one:

German tax authorities do not want German sellers to sell VAT-free goods to businesses in other EU countries. They are actively trying to undermine the VAT principles of the EU. And nobody seems to complain about it or even realize there is a problem.

What a brave new world we live in.

by Harald Welte at November 02, 2015 11:00 PM

October 31, 2015

Harald Welte

small tools: rtl8168-eeprom

Some time ago I wrote a small Linux command line utility that can be used to (re)program the Ethernet (MAC) address stored in the EEPROM attached to an RTL8168 Ethernet chip.

This is for example useful if you are a system integrator that has its own IEEE OUI range and you would like to put your own MAC address into devices that contain the said Realtek ethernet chips (already pre-programmed with some other MAC address).

The source code can be obtained from: http://git.sysmocom.de/rtl8168-eeprom/

by Harald Welte at October 31, 2015 11:00 PM

small tools: gpsdate

In 2013 I wrote a small Linux program that can be used to set the system clock based on the time received from a GPS receiver (via gpsd), particularly when a system is first booted. It is similar in purpose to ntpdate, but of course obtains the time not from ntp but from the GPS receiver.

This is particularly useful for RTC-less systems without network connectivity, which come up with a completely wrong system clock that needs to be properly set as soon as the GPS receiver finally has acquired a signal.

I asked the ntp hackers if they were interested in merging it into the official code base, and their response was (summarized) that with a then-future release of ntpd this would no longer be needed. So the gpsdate program remains an external utility.

So in case anyone else might find the tool interesting: The source code can be obtained from http://git.sysmocom.de/gpsdate/

by Harald Welte at October 31, 2015 11:00 PM

October 29, 2015

Harald Welte

Deutsche Bank / unstable interfaces

Deutsche Bank is a large, international bank. They offer services world-wide and are undoubtedly proud of their massive corporate IT department.

Yet, at the same time, they fail to get the most fundamental principles of user/customer-visible interfaces right: don't change them. If you need to change them, manage the change carefully.

In many software projects, keeping the API or other interface stable is paramount. Think of the Linux kernel, where breaking a userspace-visible interface is not permitted. The reasons are simple: If you break that interface, _everyone_ using that interface will need to change their implementation, and will have to synchronize that with the change on the other side of the interface.

The internet online banking system of Deutsche Bank in Germany permits the upload of transactions by their customers in a CSV file format.

And guess what? They change the file format from one day to the other.

  • without informing their users in advance, giving them time to adapt their implementations of that interface
  • without documenting the exact nature of the change
  • adding new fields to the CSV in the middle of the line, rather than at the end of the line, to make sure things break even more

Now if you're running a business and depend on automating your payments using the interface provided by Deutsche Bank, this means that you fail to pay your suppliers in time, and you hastily drop/delay other (paid!) work that you have to do in order to figure out what exactly Deutsche Bank decided to change, completely unannounced, from one day to the other.

If at all, I would have expected this from a hobbyist kind of project. But seriously, from one of the world's leading banks? An interface that is probably used by thousands and thousands of users? WTF?!?

by Harald Welte at October 29, 2015 11:00 PM

October 28, 2015

Harald Welte

The VMware GPL case

My absence from blogging meant that I didn't really publicly comment on the continued GPL violations by VMware, and the 2015 legal case that well-known kernel developer Christoph Hellwig has brought forward against VMware.

The most recent update by the Software Freedom Conservancy on the VMware GPL case can be found at https://sfconservancy.org/news/2015/oct/28/vmware-update/

In case anyone ever doubted: I of course join the ranks of the long list of Linux developers and other stakeholders that consider VMware's behavior completely unacceptable, if not outrageous.

For many years they have been linking modified Linux kernel device drivers and entire kernel subsystems into their proprietary vmkernel software (part of ESXi). As an excuse, they have added a thin shim layer under GPLv2 which they call vmklinux. And to make all of this work, they had to add lots of vmklinux specific API to the proprietary vmkernel. All the code runs as one program, in one address space, in the same thread of execution. So basically, it is at the level of the closest possible form of integration between two pieces of code: Function calls within the same thread/process.

In order to make all this work, they had to modify their vmkernel, implement vmklinux and also heavily modify the code they took from Linux in the first place. So the drivers are not usable with mainline linux anymore, and vmklinux is not usable without vmkernel either.

If all the above is not a clear indication that multiple pieces of code form one work/program (and subsequently must be licensed under GNU GPLv2), then what ever would be?

To me, it is probably one of the strongest cases one can find about the question of derivative works and the GPL(v2). Of course, all my ramblings have no significance in a court, and the judge may rule based on reports of questionable technical experts. But I'm convinced if the court was well-informed and understood the actual situation here, it would have to rule in favor of Christoph Hellwig and the GPL.

What I really don't get is why VMware puts up the strongest possible defense one can imagine. Not only did they not back down in lengthy out-of-court negotiations with the Software Freedom Conservancy, but they also defend themselves strongly against the claims in court.

In my many years of doing GPL enforcement, I've rarely seen such dedication and strong opposition. This shows the true nature of VMware as a malicious, unfair entity that doesn't give a sh*t about other peoples' copyright, the Free Software community and its code of conduct as a whole, and the Linux kernel developers in particular.

So let's hope they waste a lot of money in their legal defense, get a sufficient amount of negative PR out of this to the point of tainting their image, and finally obtain a ruling upholding the GPL.

All the best to Christoph and the Conservancy in fighting this fight. For those readers that want to help their cause, I believe they are looking for more supporter donations.

by Harald Welte at October 28, 2015 11:00 PM

Andrew Zonenberg, Silicon Exposed

New GPG key

Hi everyone,

I've been busy lately and haven't had a chance to post much. There will be a pretty good sized series coming up in a month or two (hopefully) on my next-gen FPGA cluster and JTAG stuff but I'm holding off until I have something better to write about.

In the meantime, I've decided that my circa 2009 GPG key is long overdue for replacement so I've issued a new one and am posting the fingerprints in multiple public locations (this being one).

The new key fingerprint is:
859B A7BA DE9C 0BD5 EC01  FF36 3461 7AB9 B31C 7D7C

Verification message signed with my old key:
http://thanatos.virtual.antikernel.net/unlisted/new-key-notes.txt.asc

by Andrew Zonenberg (noreply@blogger.com) at October 28, 2015 01:37 AM

October 27, 2015

Harald Welte

What I've been busy with

Those who don't know me personally and/or don't stay in touch more closely might be wondering what on earth happened to Harald in the last >= 1 year?

The answer would be long, but I can summarize it to I disappeared into sysmocom. You know, the company that Holger and I founded four years ago, in order to commercially support OpenBSC and related projects, and to build products around it.

In recent years, the team has been growing to the point where in 2015 we suddenly had 9 employees and a handful of freelancers working for us.

But then, that's still a small company, and based on the projects we're involved in, that team has to cover a variety of topics (next to the actual GSM/GPRS related work), including

  • mechanical engineering (enclosure design)
  • all types of electrical engineering
    • AC/electrical wiring/fusing on DIN rails
    • AC/DC and isolated DC/DC power supplies (based on modules)
    • digital design
    • analog design
    • RF design
  • prototype manufacturing and testing
  • software development
    • bare-iron bootloader/os/application on Cortex-M0
    • NuttX on Cortex-M3
    • OpenAT applications on Sierra Wireless
    • custom flavors of Linux on several different ARM architectures (TI DaVinci, TI Sitara)
    • drivers for various peripherals including Ethernet Switches, PoE PSE controller
    • lots of system-level software for management, maintenance, control

I've been involved in literally all of those topics, with more of my time spent on the electronics side than on the software side. And if software, then more on the bootloader/RTOS side than on applications.

So what did we actually build? It's unfortunately still not possible to disclose fully at this point, but it was all related to marine communications technology. GSM being one part of it, but only one of many in the overall picture.

Given the quite challenging breadth/width of the tasks at hand and problem to solve, I'm actually surprised how much we could achieve with such a small team in a limited amount of time. But then, there's virtually no time left, which meant no gpl-violations.org work, no blogging, no progress on the various Osmocom Erlang projects for core network protocols, and last but not least no Taiwan holidays this year.

Lately I see light at the end of the tunnel, and there is again a bit more time to get back to old habits, and thus I

  • resurrected this blog from the dead
  • resurrected various project homepages that have disappeared
  • started some more work on actual telecom stuff (osmo-iuh, for example)
  • restarted the Osmocom Berlin Meeting

by Harald Welte at October 27, 2015 11:00 PM

Bunnie Studios

Name that Ware October 2015

The Ware for October 2015 is shown below.

…and one of the things that plugs into the slots visible in the photo above as an extra hint…

Thanks again to Nava Whiteford for sharing this ware. Visit his blog and help him get permission from his wife to buy a SEM!

by bunnie at October 27, 2015 07:54 AM

Winner, Name that Ware September 2015

The Ware for September 2015 is a Powerex CM600HA-24H, which met its demise serving as a driver for a tesla coil in the Orage sculpture (good guess 0xbadf00d!). I have a thing for big transistors, and I was very pleased to be gifted this even though it was busted. At $300 a piece, it’s not something I just get up and buy because I want to wear it around as a piece of jewelry; but it did make for a great, if not heavy, necklace. And it was interesting to take apart to see what was inside!

As for the winner, Jimmyjo was the first to guess exactly the model of the IGBT. Congrats, email me for your prize!

by bunnie at October 27, 2015 07:53 AM

October 26, 2015

Harald Welte

Weblog + homepage online again

On October 31st, 2014, I had rebooted my main server for a kernel upgrade, and could not mount the LUKS crypto volume ever again. While the technical cause for this remains a mystery until today (it has spawned some conspiracy theories), I finally took some time to recover some bits and pieces from elsewhere. I didn't want this situation to drag on for more than a year...

Rather than bringing online the old content using sub-optimal and clumsy tools to generate static content (web sites generated by docbook-xml, blog by blosxom), I decided to give it a fresh start and try nikola, a more modern and actively maintained tool to generate static web pages and blogs.

The blog is now available at http://laforge.gnumonks.org/blog/ (a redirect from the old /weblog is in place, for those who keep broken links for more than 12 months). The RSS feed URLs are different from before, but there are again per-category feeds so people (and planets) can subscribe to the respective category they're interested in.

And yes, I do plan to blog again more regularly, to make this place not just an archive of a decade of blogging, but a place that is alive and thrives with new content.

My personal web site is available at http://laforge.gnumonks.org/ while my (similarly re-vamped) freelancing business web site is also available again at http://hmw-consulting.de/.

I still need to decide what to do about the old http://gnumonks.org/ site. It still has its old manual web 1.0 structure from the late 1990s.

I've also resurrected http://openezx.org/ and http://ftp.gpl-devices.org/ as well as http://ftp.gnumonks.org/ (old content). Next in line is gpl-violations.org, which I also intend to convert to nikola for maintenance reasons.

by Harald Welte at October 26, 2015 11:00 PM

ZeptoBARS

CHANGJIANG MMBT2222A - npn BJT transistor : weekend die-shot

Unlike the OnSemi MMBT2222A, the CHANGJIANG MMBT2222A has both a smaller die size and a simpler layout (BC847-like) - which should cause significantly lower hFE at high collector currents.

Die size 234x234 µm.


October 26, 2015 07:26 AM

October 19, 2015

ZeptoBARS

Linear LT1021-5 ±0.05% precision reference : weekend die-shot

Expected heavy duty digital correction? Nope. Just 15 fuses and buried Zener - truly a work of art.
Die size 2354x1364 µm.


October 19, 2015 08:05 AM

October 11, 2015

ZeptoBARS

ST UA741 - the opamp : weekend die-shot

µA741 was the first "usable", widespread solid state opamp, mainly due to its integrated capacitor for frequency correction (which we now take for granted in general-purpose opamps). This chip was reimplemented numerous times since 1968, like this ST UA741 from 2001. You can also take a look at the historic schematic of the µA741 here.

Die size 1073x993 µm.


October 11, 2015 05:32 PM

October 06, 2015

Video Circuits

Experiments using the Rutt-Etra Analog Video Synthesizer and Siegel colorizer, 1975

Video Synthesis Experiments, excerpts from  Edin Velez on vimeo.

A rare example of the Siegel Colorizer in use in this short excerpt.
http://edinvelez.com

by Chris (noreply@blogger.com) at October 06, 2015 12:26 PM

September 29, 2015

Elphel

Google is testing AI to respond to privacy requests

Robotic customer support fails while pretending to be an outsourced human. Last week I searched Google for Elphel and I got a wrongly spelled name, a wrong address and a wrong phone number.

Google search for Elphel

Google search for Elphel

A week ago I tried Google Search for our company (usually I only check recent results using last week or last 3 days search) and noticed that on the first result page there is a Street View of my private residence, my home address pointing to a business with the name “El Phel, Inc”.

Yes, when we first registered Elphel in 2001 we used our home address, and even the first $30K check from Google for development of the Google Books camera came to this address, but it was never “El Phel, Inc.” Later wire transfers with payments to us for Google Books cameras as well as Street View ones were coming to a different address – 1405 W. 2200 S., Suite 205, West Valley City, Utah 84119. In 2012 we moved to the new building at 1455 W. 2200 S. as the old place was not big enough for the panoramic camera calibration.

I was not happy to see my house showing up as the top result when searching for Elphel: it is both a breach of my family’s privacy and harmful to Elphel’s business. Personally I would not consider a 14-year-old company with an international customer base a serious one if it were just a one-man home-based business. Sure, you can get similar Street View results for Google itself, but they would not come up when you search for “Google”. Nor would it return a wrongly spelled business name like “Goo & Gel, Inc.” and a phone number that belongs to a Baptist church in Lehi, Utah (update: they changed the phone number to the one of Elphel).

Google original location

Google original location

Honestly, some of the fault was ours too: I’ve seen “El Phel” in a local Yellow Pages, but as we do not have a local business I did not pay attention to it – Google was always good at providing relevant information in its search results, extracting actual contact information from a company’s “Contacts” page directly.

Noticing that Google had lost its edge in providing search results (Bing and Yahoo show relevant data), I first contacted Yellow Pages and asked them to correct information as there is no “El Phel, Inc.” at my home address and that I’m not selling any X-Ray equipment there. They did it very promptly and the probable source of the Google misinformation (“probable” as Google does not provide any links to the source) was gone for good.

I waited for 24 hours hoping that Google would correct the information automatically (a post on the Elphel blog appears in Google search results 10 – 19 seconds after I press the “Publish” button). Nothing happened – the same “El Phel, Inc.” at our house.

So I tried to contact Google. As Google did not provide the source of the search result, I tried to follow the recommendations to correct the information on the map. And the first step was to log in with a Google account, since I could not find a way to contact Google without such an account. Yes, I do have one – I used Gmail when Google was our customer, and when I later switched to another provider (I prefer to use only one service per company, and I selected to use Google Search) I did not delete the Gmail account. I found my password and was able to log in.

First I tried to select “Place doesn’t exist” (There is no such company as “El Phel, Inc.” with invalid phone number, and there is no business at my home address).

Auto confirmation came immediately:
From: Google Maps <noreply-maps-issues@google.com>
Date: Wed, Sep 23, 2015 at 9:55 AM
Subject: Thanks for the edit to El Phel Inc
To: еlphеl@gmаil.cоm
Maps
Thank you
Your edit is being reviewed. Thanks for sharing your knowledge of El Phel Inc.
El Phel Inc
3200 Elmer St, Magna, UT, United States
Your edit
Place doesn't exist
Edited on Sep 23, 2015 · In review
Keep exploring,
The Google Maps team
© 2015 Google Inc. 1600 Amphitheatre Parkway, Mountain View, CA 94043
You've received this confirmation email to update you about your editing activities on Google Maps.

But nothing happened. Two days later I tried with a different option (there was no place to provide a text entry)
Your edit
Place is private

No results either.

Then I tried to follow the other link after the inappropriate search result – “Are you the business owner?” (I’m not an owner of the non-existing business, but I am an owner of my house). And yes, I had to use my Gmail account again. There were several options for how I prefer to be contacted – I selected “by phone”, and shortly after a female-voiced robot called. I do not have a habit of talking to robots, so I did not listen to what it said, waiting for keywords like: “press 0 to talk to a representative” or “Please stay on the line…”, but it never said anything like this and immediately hung up.

The second time I selected email contact, but it seems to me that the email conversation was with some kind of Google Eliza. This was the first email:

From : local-help@google.com
To : andrey@elphel.com
Subject : RE: [7-2344000008781] Google Local Help
Date : Thu, 24 Sep 2015 22:48:47 -0700
Add Label
Hi,
Greetings from Google.
After investigating, i found that here is an existing page on Google (El Phel Inc-3200 S Elmer St Magna, UT 84044) which according to your email is incorrect information.
Apologies for the inconvenience andrey, however as i can see that you have created a page for El Phel Inc, hence i would first request you to delete the Business page if you aren't running any Business. Also you can report a problem for incorrect information on Maps,Here is an article that would provide you further clarity on how to report a problem or fix the map.
In case you have any questions feel free to reply back on the same email address and i would get back to you.
Regards,
Rohit
Google My Business Support.

This robot tried to mimic a kid’s language (without capitalizing “I” or the first letter of my name), and its level of understanding of the matter was below that of a human (it was Google, not me, who created that page – I just wanted it to be removed).

I replied as I thought it still might be a human, just tired and overwhelmed by so many privacy-related requests they receive (the email came well after hours in the United States).

From : andrey <andrey@elphel.com>
To : local-help@google.com
Subject : RE: [7-2344000008781] Google Local Help
Date : Fri, 25 Sep 2015 00:16:21 -0700
Hello Rohit,
I never created such page. I just tried different ways to contact Google to remove this embarrassing link. I did click on "Are you the business owner" (I am the owner of this residence at 3200 S Elmer St Magna, UT 84044) as I hoped that when I'll get the confirmation postcard I'll be able to reply that there is no business at this residential address).
I did try link "how to report a problem or fix the map", but I could not find a relevant method to remove a search result that does not reference external page as a source, and assigns my home residence to the search results of the company, that has a different (than listed) name, is located in a different city (West Valley City, 84119, not in Magna, 84044), and has a different phone number.
So please, can you remove that incorrect information?
Andrey Filippov

Nothing happened either, then on Sunday night (local time) came another email from “Rohit”:

From : local-help@google.com
To : andrey@elphel.com
Subject : RE: [7-2344000008781] Google Local Help
Date : Sun, 27 Sep 2015 18:11:44 -0700
Hi,
Greetings from Google.
I am working on your Business pages and would let you know once get any update.
Please reply back on the same email address in case of any concerns.
Regards,
Rohit
Google My Business Support

You may notice that it had the same ticket number, so the sender had all the previous information when replying. For any human capable of using just Google Search it would take no more than 15-30 seconds to find out that their information is incorrect and either remove it completely (as I asked) or replace it with some relevant one.

And there is another detail that troubles me. Looking at the times/days when the “Google My Business Support” emails came, and the name “Rohit”, it may look like they came from India. While testing non-human communications Google might hope that correspondents would be more likely to attribute some inconsistencies in the generated emails to cultural differences and miss actual software flaws. Does Google count on us being somewhat racist?

Following the provided links I was not able to get any response from a human representative; only two robots (phone and email) contacted me. I hope that this post will work better and will help to cure this breach of my family’s privacy and end the harm that this invalid information, provided by such a respected Internet search company, causes to the business. I realize that robots will take over more and more of our activities (and we are helping that to happen ourselves), but maybe this process sometimes goes too fast?

by andrey at September 29, 2015 04:25 AM

September 28, 2015

Bunnie Studios

Sex, Circuits & Deep House

P9010002
Cari with the Institute Blinky Badge at Burning Man 2015. Photo credit: Nagutron.

This year for Burning Man, I built a networked light badge for my theme camp, “The Institute”. Walking in the desert at night with no light is a dangerous proposition – you can get run over by cars, bikes, or twist an ankle tripping over an errant bit of rebar sticking out of the ground. Thus, the outrageous, bordering grotesque, lighting spectacle that Burning Man becomes at night grows out of a central need for safety in the dark. While a pair of dimly flashing red LEDs should be sufficient to ensure one’s safety, anything more subtle than a Las Vegas strip billboard tends to go unnoticed by fast-moving bikers thanks to the LED arms race that has become Burning Man at night.

I wanted to make a bit of lighting that my campmates could use to stay safe – and optionally stay classy by offering a range of more subtle lighting effects. I also wanted the light patterns to be individually unique, allowing easy identification in dark, dusty nights. However, diddling with knobs and code isn’t a very social experience, and few people bring laptops to Burning Man. I wanted to come up with a way for people to craft an identity that was inherently social and interactive. In an act of shameless biomimicry, I copied nature’s most popular protocol for creating individuals – sex.

By adding a peer-to-peer radio in each badge, I was able to implement a protocol for the breeding of lighting patterns via sex.



Some examples of the unique light patterns possible through sex.

Sex

When most people think of sex, what they are actually thinking about is sexual intercourse. This is understandable, as technology allows us to have lots of sexual intercourse without actually accomplishing sexual reproduction. Still, the double-entendre of saying “Nice lights! Care to have sex?” is a playful ice breaker for new interactions between camp mates.

Sex, in this case, is used to breed the characteristics of the badge’s light pattern as defined through a virtual genome. Things like the color range, blinking rate, and saturation of the light pattern are mapped into a set of diploid (two copies of each gene) chromosomes (code) (spec). Just as in biological sex, a badge randomly picks one copy of each gene and packages them into a sperm and an egg (every badge is a hermaphrodite, much like plants). A badge’s sperm is transmitted wirelessly to another host badge, where it’s mixed with the host’s egg and a new individual blending traits of both parents is born. The new LED pattern replaces the current pattern on the egg donor’s badge.
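
A minimal sketch of what that breeding step could look like in code – the struct, the function names and the 16-gene genome length are my own illustration, not the actual badge firmware:

#include <stdint.h>
#include <stdlib.h>

#define NUM_GENES 16                    /* assumed genome length */

struct genome {
    uint8_t allele[2][NUM_GENES];       /* diploid: two copies of each gene */
};

/* Meiosis: randomly pick one of the two copies of each gene. */
static void make_gamete(const struct genome *parent, uint8_t gamete[NUM_GENES])
{
    for (int i = 0; i < NUM_GENES; i++)
        gamete[i] = parent->allele[rand() & 1][i];
}

/* Fertilization: the received sperm and the local egg form the child,
 * whose light pattern replaces the one on the maternal badge. */
static void fuse(const uint8_t sperm[NUM_GENES], const uint8_t egg[NUM_GENES],
                 struct genome *child)
{
    for (int i = 0; i < NUM_GENES; i++) {
        child->allele[0][i] = sperm[i];
        child->allele[1][i] = egg[i];
    }
}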

Biological genetic traits are often analog, not digital – height or weight are not coded as discrete values in a genome. Instead, observed traits are the result of a complex blending process grounded in the minutiae of metabolic pathways and the efficacy of enzymes resulting from the DNA blueprint and environment. The manifestation of binary situations like recessive vs. dominant is often the result of a lot of gain being applied to an analog signal, thus causing the expressed trait to saturate quickly if it’s expressed at all.

In order to capture the wonderful diversity offered by sex, I implement quantitative traits in the light genome. Instead of having a single bit for each trait, it’s a byte, and there’s an expression function that combines the values from each gene (alleles) to derive a final observed trait (phenotype).

By carefully picking expression functions, I can control how the average population looks. Let’s consider saturation (I used an HSV colorspace, instead of RGB, which makes it much easier to create aesthetically pleasing color combinations). A highly saturated color is vivid and bright. A less saturated color appears pastel, until finally it’s washed out and looks just white or gray (a condition analogous to albinism).

If I want albinism to be rare, and bright colors to be common, the expression function could be a saturating add. Thus, even if one allele (copy of the gene) has a low value, the other copy just needs to be a modest value to result in a bright, vivid coloration. Albinism only occurs when both copies have a fairly low value.




Population makeup when using saturating addition to combine the maternal and paternal saturation values. Albinism – a badge light pattern looking white or gray – happens only when both maternal and paternal values are small. ‘S’ means large saturation, and ‘s’ means little saturation. ‘SS’ and ‘Ss’ pairings of genes leads to saturated colors, while only the ‘ss’ combination leads to a net low saturation (albinism).

On the other hand, if I wanted the average population to look pastel, I can simply take the average of each allele, and take that to be the saturation value. In this case, a bright color can only be achieved if both alleles have a high value. Likewise, an albino can only be achieved if both alleles have a low value.




Population makeup when using averaging to combine the maternal and paternal saturation values. The most common case is a pastel palette, with vivid colors and albinism both suppressed in the population.

For Burning Man, I chose saturating addition as the expression function, to have the population lean toward vivid colors. I implemented other features such as cyclic dimming, hue rotation, and color range using similar techniques.
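
For illustration, the two expression functions discussed above could look roughly like this (a sketch with assumed names, not the shipped badge code):

#include <stdint.h>

/* Saturating add: vivid colors are common, albinism needs two low alleles. */
static uint8_t expr_saturating_add(uint8_t mom, uint8_t dad)
{
    uint16_t sum = (uint16_t)mom + dad;
    return sum > 255 ? 255 : (uint8_t)sum;
}

/* Averaging: the population leans pastel, vivid colors need two high alleles. */
static uint8_t expr_average(uint8_t mom, uint8_t dad)
{
    return (uint8_t)(((uint16_t)mom + dad) / 2);
}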

It’s important when thinking about biological genes to remember that they aren’t like lines of computer code. Rather, they are like the knobs on an analog synth, and the resulting sound depends not just on the position of the knob, but on where it is in the signal chain and how it interacts with other effects.

Gender and Consent

Beyond genetics, there is a minefield of thorny decisions to be made when implementing the social policies and protocols around sex. What are the gender roles? And what about consent? This is where technology and society collide, making for a fascinating social experiment.

I wanted everyone to have an opportunity to play both gender roles, so I made the badges hermaphroditic, in the sense that everyone can give or receive genetic material. The “maternal” role receives sperm, combines it with an egg derived from the currently displayed light pattern, and replaces its light pattern with a new hybrid of both. The “paternal” role can transmit a sperm derived from the currently displayed pattern. Each badge has the requisite ports to play both roles, and thus everyone can play the role of male or female simply by being either the originator of or responder to a sex request.

This leads us to the question of consent. One fundamental flaw in the biological implementation of sex is the possibility of rape: operating the hardware doesn’t require mutual consent. I find the idea of rape disgusting, even if it’s virtual, so rape is disallowed in my implementation. In other words, it’s impossible for a paternal badge to force a sperm into a maternal badge: male roles are not allowed to have sex without first being asked by a female role. Instead, the person playing the female role must first initiate sex with a target mate. Conversely, female roles can’t steal sperm from male roles; sperm is only generated after explicit consent from the male. Assuming consent is given, a sperm is transmitted to the maternal badge and the protocol is complete. This two-way handshake assures mutual consent.
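
A rough sketch of the paternal side of that handshake – the message names and the radio_send() stub are invented for illustration and are not the actual Orchard radio protocol:

#include <stdio.h>

enum sex_msg { SEX_REQUEST, SEX_CONSENT_DENIED, SEX_SPERM };

/* Stand-in for the badge's peer-to-peer radio transmit path. */
static void radio_send(enum sex_msg msg) { printf("tx msg %d\n", msg); }

/* Sperm is only ever generated in response to a request, and only
 * after the local (paternal) user explicitly says yes. */
static void paternal_handle(enum sex_msg msg, int user_consents)
{
    if (msg != SEX_REQUEST)
        return;                          /* unsolicited sperm is impossible */
    if (!user_consents) {
        radio_send(SEX_CONSENT_DENIED);  /* consent refused, protocol ends */
        return;
    }
    radio_send(SEX_SPERM);               /* mutual consent: transmit the gamete */
}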

This non-intuitive and partially role-reversed implementation of sex led to users asking support questions akin to “I’m trying to have sex, but why am I constantly being denied?” and my response was – well, did you ask your potential mate if it was okay to have sex first? Ah! Consent. The very important but often overlooked step before sex. It’s a socially awkward question, but with some practice it really does become more natural and easy to ask.

Some users were enthusiastic early adopters of explicit consent, while others were less comfortable with the question. It was interesting to see the ways straight men would ask other straight men for sex – they would ask for “ahem, blinky sex” – and anecdotally women seemed more comfortable and natural asking to have sex (regardless of the gender of the target user).

As an additional social experiment, I introduced a “rare” trait (pegged at ~3% of a randomly generated population) consisting of a single bright white pixel that cycles around the LED ring. I wanted to see if campmates would take note and breed for the rare trait simply because it’s rare. At the end of the week, more people were expressing the rare phenotype than at the beginning, so presumably some selective breeding for the trait did happen.

In the end, I felt that having sex to breed interesting light patterns was a lot more fun for everyone than tweaking knobs and sliders in a UI. Also, because traits are inherited through sexual reproduction, by the end of the event one started to see families of badges gaining similar traits, but thanks to the randomness inherent in sex you could still tell individuals apart in the dark by their light patterns.

Finding Friends

Implementing sex requires a peer-to-peer radio. So why not also use the radio to help people locate nearby friends? Seems like a good idea on the outside, but the design of this system is a careful balance between creating a general awareness of friends in the area vs. creating a messaging client.

Personally, one of the big draws of going to Burning Man is the ability to unplug from the Internet and live in an environment of intimate immediacy – if you’re physically present, you get 100% of my attention; otherwise, all bets are off. Email, SMS, IRC, and other media for interaction (at least, I hear there are others, but I don’t use them…) are great for networking and facilitating business, but they detract from focusing on the here and now. For me there’s something ironic about seeing a couple in a fancy restaurant, both hopelessly lost staring deeply into their smartphones instead of each other’s eyes. Being able to set an auto-responder for two weeks which states that your email will never be read is pretty liberating, and allows me to open my mind up to trains of thought that can take days to complete. Thus, I really wanted to avoid turning the badge into a chat client, or any sort of communication medium that sets any expectation of reading messages and responding in a timely fashion.

On the other hand, meeting up with friends at Burning Man is terribly hard. It’s life before the cell phone – if you’re old enough to remember that. Without a cell phone, you have a choice between enjoying the music, stalking around the venue to find friends, or dancing in one spot all night long so you’re findable. Simply knowing if my friends have finally showed up is a big help; if they haven’t arrived yet, I can get lost in the music and check out the sound in various parts of the venue until they arrive.

Thus, I designed a very simple protocol which will only reveal if your friends are nearby, and nothing else. Every badge emits a broadcast ping every couple of seconds. Ideally, I’d use an RSSI (receive signal strength indicator) to figure out how far the ping is, but due to a quirk of the radio hardware I was unable to get a reliable RSSI reading. Instead, every badge would listen for the pings, and decrement the ping count at a slightly slower average rate than the ping broadcast. Thus, badges solidly within radio range would run up a ping count, and as people got farther and farther away, the ping count would decrease as pings gradually get lost in the noise.
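
Sketched in code, that heuristic could look something like this (names and constants are my assumptions for illustration, not the actual Orchard firmware):

#include <stdint.h>

#define MAX_FRIENDS       32
#define PING_INTERVAL_S    2   /* each badge broadcasts roughly every 2 s */
#define DECAY_INTERVAL_S   3   /* decay runs slightly slower than the ping rate */
#define COUNT_MAX         10

struct friend {
    char    name[16];
    uint8_t pings;             /* rough proximity score shown in the UI */
};

static struct friend friends[MAX_FRIENDS];

/* Called whenever a broadcast ping from a known badge is received:
 * badges solidly in radio range run their counter up to the cap. */
static void on_ping(struct friend *f)
{
    if (f->pings < COUNT_MAX)
        f->pings++;
}

/* Called every DECAY_INTERVAL_S seconds: out-of-range badges miss pings,
 * so their counters gradually drift back down to zero. */
static void decay_tick(void)
{
    for (int i = 0; i < MAX_FRIENDS; i++)
        if (friends[i].pings > 0)
            friends[i].pings--;
}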


Friend finding UI in action. In this case, three other badges are nearby, SpacyRedPhage, hap, and happybunnie:-). SpacyRedPhage is well within range of the radio, and the other two are farther away.

The system worked surprisingly well. The reliable range of the radio worked out to be about 200m in practice, which is about the sound field of a major venue at Burning Man. It was very handy for figuring out if my friends had left already for the night, or if they were still prepping at camp; and there was one memorable reunion at sunrise where a group of my camp mates drove our beloved art car, Dr. Brainlove, to Robot Heart and I was able to quickly find them thanks to my badge registering a massive amount of pings as they drove into range.

Hardware Details

I’m not so lucky that I get to design such a complex piece of hardware exclusively for a pursuit as whimsical as Burning Man. Rather, this badge is a proof-of-concept of a larger effort to develop a new open-source platform for networked embedded computers (please don’t call it IoT) backed by a rapid deployment supply chain. Our codename for the platform is Orchard.

The Burning Man badge was our first end-to-end test of Orchard’s “supply chain as a service” concept. The core reference platform is fairly well-documented here, and as you can see looks nothing like the final badge.


Bottom: orchard reference design; top: orchard variant as customized for Burning Man.

However, the only difference at a schematic level between the reference platform and the badge is the addition of 14 extra RGB LEDs, the removal of the BLE radio, and redesign of the captouch electrode pattern. Because the BOM of the badge is a strict subset of the reference design, we were able to go from a couple prototypes in advance of a private Crowd Supply campaign to 85 units delivered at the door of camp mates in about 2.5 months – and the latency of shipping units from China to front doors in the US accounts for one full month of that time.




The badge sports an interactive captouch surface, an OLED display, 900MHz ISM band peer-to-peer radio, microphone, accelerometer, and more!

If you’re curious, you can view documentation about the Orchard platform here, and discuss it at the Kosagi forum.

Reflection

As an engineer, my “default” existence is confined on four sides by cost, schedule, quality, and specs, with a sprinkling of legal, tax, and regulatory constraints on top. It’s pretty easy to lose your creative spark when every day is spent threading the needle of profit and loss.

Even though the implementation of Burning Man’s principles of decommodification and gifting is far from perfect, it’s sufficient to enable me to loosen the shackles of my daily existence and play with technology as a medium for enhancing human interactions, and not simply as a means for profit. In other words, thanks to the values of the community, I’m empowered and supported to build stuff that wouldn’t make sense for corporate shareholders, but might improve the experiences of my closest friends. I think this ability to leave daily existence behind for a couple weeks is important for staying balanced and maintaining perspective, because at least for me maximizing profit is rarely the same as maximizing happiness. After all, a warm smile and a heartfelt hug is priceless.

by bunnie at September 28, 2015 10:16 AM

September 26, 2015

ZeptoBARS

Diodes BC847BS - matched BJT pair : weekend die-shot

Diodes Incorporated BC847BS - pair of npn transistors with matched hFE. Internally it has 2 separate dies.
Die size 306x306 µm.



Second die:


Lithography repeatability is definitely better than this. Parameter matching is likely achieved by using adjacent dies from the wafer. 2 dies are used because one cannot place 2 BC847 transistors on the same die without significant changes to the technology (and it wouldn't be a BC847 anymore) - the die bulk is a transistor terminal.

Difference between the dies. The top metal is quite non-uniform optically (as usual) over the area, but this is unlikely to have any impact on electrical characteristics. It would be interesting to make a similar difference photo for non-matched transistors.

September 26, 2015 01:22 PM

September 25, 2015

Free Electrons

Free Electrons at the Linux Kernel Summit 2015

Kernel Summit 2012 in San Diego

The Linux Kernel Summit is, as Wikipedia says, an annual gathering of the top Linux kernel developers, and is an invitation-only event.

In 2012 and 2013, several Free Electrons engineers were invited to and participated in a sub-event of the Linux Kernel Summit, the “ARM mini-kernel summit”, which was more specifically focused on ARM-related developments in the kernel. Gregory Clement and Thomas Petazzoni went to the event in 2012 in San Diego (United States), and in 2013, Maxime Ripard, Gregory Clement, Alexandre Belloni and Thomas Petazzoni participated in the ARM mini-kernel summit in Edinburgh (UK).

This year, Thomas Petazzoni has been invited to the Linux Kernel Summit, which will take place in late October in Seoul (South Korea). We’re happy to see that our continuous contributions to the Linux kernel are recognized and allow us to participate in such an invitation-only event. For us, participating in the Linux Kernel Summit is an excellent way of keeping up-to-date with the latest Linux kernel developments, and also, where needed, of giving feedback from our experience working in the embedded industry with several SoC, board and system vendors.

by Thomas Petazzoni at September 25, 2015 11:26 AM

September 24, 2015

ZeptoBARS

TL431 - adjustable shunt regulator : weekend die-shot

TL431 is another adjustable shunt regulator, often used in linear supplies with an external power transistor.
Die size 592x549 µm.


September 24, 2015 01:34 PM

September 18, 2015

Elphel

NC393 progress update: all hardware is operational

10393 with 4 image sensors



Finally all the parts of the NC393 prototype are tested, and we can now make the circuit diagram, parts list and PCB layout of this board public. About half of the board components were tested immediately when the prototype was built – almost two years ago – as those tests did not require any FPGA code, just the initial software that was mostly already available from the distributions for the other boards based on the same Xilinx Zynq SoC. The only missing parts were the GPL-licensed initial bootloader and a few device drivers.

Implementation of the 16-channel DDR3 memory controller

Getting to the next part – testing of the FPGA-controlled DDR3 memory – took us longer: the overall concept and the physical layer were implemented in June 2014, while the timing calibration software and the application modules for image recording and retrieval were implemented in the spring of 2015.

Initial image acquisition and compression

Once the memory was proven operational, what remained untested on the board were the sensor connections and the high-speed serial links for SATA. I decided not to make any temporary modules just to check the sensor physical connections, but to port the complete image acquisition, processing and compression functionality of the existing NC353 camera (just at a higher clock rate and with multiple channels instead of a single one) and then test the physical operation together with all the code.

Sensor acquisition channels: From the sensor interface to the video memory buffer

The image acquisition code was ported (or rewritten) in June 2015. This code includes:

  • Sensor physical interface – currently for the existing 10338 12-bit parallel sensor front ends, with provisions for adding high-speed serial sensors with up to 8 lanes + clock. It is also planned to bond multiple sensor channels together to interface a single large/high-speed sensor
  • Data and clock synchronization, with flexible phase adjustment to recover image data and frame format for different camera configurations, including sensor multiplexers such as the 10359 board
  • Correction of the lens vignetting and fine-step scaling of the pixel values, individually for each of the multiplexed sensors and each color channel
  • Programmable gamma-conversion of the image data (a conceptual software sketch of this stage follows this list)
  • Writing image data to the DDR3 image buffer memory using one or several frame buffers per channel; both 8bpp and 16bpp (raw image data, bypassing gamma-conversion) formats are supported
  • Calculation of the histograms, individually for each color component and multiplexed sensor
  • Histograms multiplexer and AXI interface to automatically transfer histogram data to the system memory
  • I²C sequencer – controls image sensors over the I²C interface by applying software-provided register changes when the designated frame starts; commands can be scheduled up to 14 frames in advance
  • Command frame sequencer (one per sensor channel) – schedules and applies system register writes (such as those controlling the compressors) synchronously with the sensor frames; commands can be scheduled up to 14 frames in advance
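For readers less familiar with the gamma stage, here is a rough software model of what a programmable gamma-conversion table does conceptually. It is only an illustration in Python, not the FPGA implementation; the 12-bit input / 8-bit output widths and the gamma value are assumptions for the example.

# Conceptual model only: map linear 12-bit sensor values to 8-bit output values
# through a software-programmable lookup table.
def build_gamma_table(gamma=0.5, in_bits=12, out_bits=8):
    max_in, max_out = (1 << in_bits) - 1, (1 << out_bits) - 1
    return [round(max_out * (v / max_in) ** gamma) for v in range(max_in + 1)]

def apply_gamma(pixels, table):
    return [table[p] for p in pixels]

table = build_gamma_table()
print(apply_gamma([0, 1024, 2048, 4095], table))  # -> [0, 128, 180, 255]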

JPEG/JP4 compression functionality

Image compressors get their input data from the external video buffer memory organized as 16×16 pixel macroblocks; in the case of color JPEG images, larger overlapping tiles of 18×18 (or 20×20) pixels are needed to interpolate the “missing” colors from the Bayer mosaic input. As all the data goes through the buffer, there is no strict requirement to have the same number of compressor and image acquisition modules, but the initial implementation uses a 1:1 ratio and there are 4 identical compressor modules instantiated in the design. The compressor output data is multiplexed between the channels and then transferred to the system memory using 1 or 2 of the Xilinx Zynq AXI HP interfaces.
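As a simple illustration of why the tiles overlap, the Python sketch below (my own example, not the camera code) extracts an 18×18 tile around a 16×16 block: Bayer interpolation of each border pixel needs at least one neighbor on every side. Clamping coordinates at the frame edges is an assumption about how borders could be handled.

def overlapping_tile(frame, row, col, block=16, border=1):
    """Return a (block + 2*border) square tile around the block whose top-left
    corner is (row, col), clamping coordinates at the frame edges."""
    h, w = len(frame), len(frame[0])
    tile = []
    for r in range(row - border, row + block + border):
        rr = min(max(r, 0), h - 1)
        tile.append([frame[rr][min(max(c, 0), w - 1)]
                     for c in range(col - border, col + block + border)])
    return tile

# A 16x16 block at (0, 0) of a 32x32 frame yields an 18x18 tile.
frame = [[0] * 32 for _ in range(32)]
assert len(overlapping_tile(frame, 0, 0)) == 18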

This portion of the code is also based on the earlier design used in the existing NC353 camera (some modules reuse code from as early as 2002); the new part of the code deals with flexible memory access, whereas the older cameras’ firmware used a hard-wired 20×20 pixel tile format. The current code contains four identical compressor channels providing JPEG/JP4 compression of the data stored in the dedicated DDR3 video buffer memory and then transferring the result to the system memory circular buffers over one or two of the four Xilinx Zynq AXI HP channels. Other camera applications that use sensor data for realtime processing rather than transferring all the image data to the host may reduce the number of compressors. It is also possible to use multiple compressors to work on a single high resolution/high frame rate sensor data stream.

A single compressor channel contains:

  • Macroblock buffer interface requests 32×18 or 32×16 pixel tiles from the memory and provides 18×18 overlapping macroblocks for JPEG or 16×16 non-overlapping macroblocks for JP4, using a 4KB memory buffer. This buffer eliminates the need to re-read horizontally overlapping pixels when processing consecutive macroblocks
  • Pixel buffer interface retrieves data from the memory buffer, providing a sequential pixel stream of 18×18 (or 16×16) pixels for each macroblock
  • Color conversion module selects one of the sub-modules: csconvert18a, csconvert_mono, csconvert_jp4 or csconvertjp4_diff to convert possibly overlapping Bayer mosaic tiles to a sequence of 8×8 blocks for the 2-d DCT transform
  • Average value extractor calculates the average value in each 8×8 block, subtracts it before the DCT and restores it after – that reduces the data width in the DCT processing module
  • xdct393 performs the 2-d DCT for each 8×8 pixel block
  • Quantizer re-orders each block’s DCT components from scan-line to zigzag sequence and quantizes them using software-calculated and loaded tables. This is the only lossy stage of the JPEG algorithm; when the compression quality is set to 100% all the coefficients are set to 1 and the conversion is lossless
  • Focus sharpness module accumulates the amount of high-frequency components to estimate image sharpness over a specified window to facilitate (auto) focusing. It also allows replacing, on the fly, the average block value of the image with the amount of high-frequency components in the same block, providing a visual indication of the focus sharpness
  • RLL encoder converts the continuous 64-samples-per-block data stream into RLL-encoded data bursts
  • Huffman encoder uses software-generated tables to provide additional lossless compression of the RLL-encoded data. This module (together with the next one) runs at double the pixel clock rate and has an input FIFO between the clock domains
  • Bit stuffer consolidates variable-length codes coming out of the Huffman encoder into fixed-width words, escaping each 0xff byte (these bytes have a special meaning in a JPEG stream) by inserting 0x00 right after it – a minimal sketch of this escaping follows the list. It additionally provides the image timestamp and length in bytes after the end of the compressed data, before padding the data to a multiple of 32-byte chunks; this metadata has a fixed offset before the 32-byte aligned data end
  • Compressor output FIFO converts 16-bit wide data from the bit stuffer module, received at double the compressor clock rate (currently 200MHz), and provides 64-bit wide output at the maximal clock rate (150MHz) for the AXI HP port of Xilinx Zynq; it also provides buffering when several compressor channels share the same AXI HP channel
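To make the bit stuffer’s escaping rule concrete, here is a minimal Python sketch of the standard JPEG byte-stuffing step – an illustration of the rule only, not the Verilog implementation:

def stuff_jpeg_bytes(entropy_coded: bytes) -> bytes:
    """Insert a 0x00 after every 0xFF so decoders do not mistake data for markers."""
    out = bytearray()
    for b in entropy_coded:
        out.append(b)
        if b == 0xFF:
            out.append(0x00)
    return bytes(out)

assert stuff_jpeg_bytes(b"\x12\xff\x34") == b"\x12\xff\x00\x34"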

Another module – the 4:1 compressor multiplexer – is shared between multiple compressor channels. It is possible (defined by Verilog parameters) to use either a single multiplexer with one AXI HP port (SAXIHP1) and 4 compressor inputs (4:1), or two of these modules interfacing two AXI HP channels (SAXIHP1 and SAXIHP2), reducing the number of concurrent inputs of each multiplexer to just 2 (2 × 2:1). The multiplexers use a fair arbitration policy and consolidate AXI bursts to full 16×64 bits when possible. Status registers provide image data pointers for the last write and the last frame start, each as sent to AXI and after confirmation using the AXI write response channel.

Porting remaining FPGA functionality to the new camera

Additional modules were ported to complete the existing NC353 functionality:

  • Camera real time clock that provides the current time with 1 microsecond resolution to various modules. It has accumulator-based correction circuitry to compensate for crystal oscillator frequency variations (a rough software model of this idea follows the list)
  • Inter-camera synchronization module generates and/or receives synchronization signals between multiple camera modules or other devices. When used between cameras, each synchronization pulse has timestamp information attached in a serialized form, so the metadata of simultaneous images from multiple synchronized cameras contains the same time code generated by the “master” camera
  • Event logger records data from multiple sources, such as GPS, IMU, image acquisition events and external signal channel (like a vehicle wheel rotation sensor)
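The accumulator-based correction mentioned above can be pictured with the following rough software model; the exact circuit is not described in the post, so the trim word scaling, accumulator width and behaviour here are my assumptions, not Elphel’s implementation.

class CorrectedClock:
    """Toy model: each raw microsecond tick adds a signed trim word to an
    accumulator; on overflow one extra microsecond is added or dropped,
    nudging the average rate to compensate for oscillator error."""

    def __init__(self, trim, acc_bits=24):
        self.trim = trim              # signed correction word set by software
        self.mod = 1 << acc_bits
        self.acc = 0
        self.time_us = 0

    def tick(self):                   # called once per raw (uncorrected) microsecond
        self.time_us += 1
        self.acc += self.trim
        if self.acc >= self.mod:      # oscillator slightly slow: insert a microsecond
            self.acc -= self.mod
            self.time_us += 1
        elif self.acc <= -self.mod:   # oscillator slightly fast: drop a microsecond
            self.acc += self.mod
            self.time_us -= 1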

Simulating the full codebase

All that code was written (either new, or modified from the existing NC353 FPGA project) by the end of July 2015, and then the most fun began. First I used the proven NC353 code to simulate (using Icarus Verilog + GtkWave) with the same input data as the one provided to the new x393 code, following the signal chains and making sure that the data matched at each checkpoint. That was especially useful when debugging the JPEG compressor, as the intermediate data is difficult to follow. When I was developing the first JPEG compressor in 2002, I had to save output data from the various processing stages and compare it to the software compression output of the same image data at similar stages. Having a working implementation helped a lot, and in 3 weeks I was able to match the output of all the processing stages described above, except the event logger that I have not verified yet.
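The stage-by-stage comparison described above can be imagined as something like the following Python helper; the dump file names and the one-value-per-line format are purely hypothetical, it only illustrates the workflow of matching reference and new simulation outputs.

def compare_stage_dumps(ref_path, new_path, stage_name):
    """Compare two text dumps (one value per line) from the reference NC353
    simulation and the new x393 simulation; report the first mismatch."""
    with open(ref_path) as ref, open(new_path) as new:
        for index, (a, b) in enumerate(zip(ref, new)):
            if a.strip() != b.strip():
                print(f"{stage_name}: mismatch at sample {index}: {a.strip()} != {b.strip()}")
                return False
    print(f"{stage_name}: outputs match")
    return True

# Hypothetical usage:
# compare_stage_dumps("nc353_dct_out.txt", "x393_dct_out.txt", "DCT")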

Testing the hardware

Then it was time to translate the Verilog test fixture code into Python programs running on the target hardware, extending the code developed earlier for the memory controller. The code is able to parse Verilog parameter definition files, which simplified keeping the Verilog and Python code synchronized. It would be nice to use something like Cocotb in the future and completely get rid of the manual Verilog-to-Python translation.
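A parser of that kind can be quite small; the sketch below is my own simplified example (not the actual x393 Python code) that extracts parameter definitions such as "parameter FRAME_HEIGHT = 16'h0800;" and converts common Verilog literals to Python integers.

import re

PARAM_RE = re.compile(r"parameter\s+(\w+)\s*=\s*([^;,]+)")

def verilog_int(text):
    """Convert a Verilog literal such as 16'h0800, 1'b1 or 25 to a Python int."""
    text = text.strip().replace("_", "")
    m = re.match(r"(?:\d+)?'([bodhBODH])(.+)", text)
    if m:
        base = {"b": 2, "o": 8, "d": 10, "h": 16}[m.group(1).lower()]
        return int(m.group(2), base)
    return int(text, 0)

def parse_parameters(path):
    params = {}
    with open(path) as f:
        for line in f:
            for name, value in PARAM_RE.findall(line):
                try:
                    params[name] = verilog_int(value)
                except ValueError:
                    pass  # skip expressions this toy parser cannot evaluate
    return params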

As I am designing code for a reconfigurable FPGA (not for an ASIC), my usual strategy is not to aim for high simulation coverage, but to simulate to a “barely working” stage, then use the actual hardware (which runs tens of millions of times faster than the simulator), detect the problems, and then try to reproduce the same condition in simulation. But when I just started to run the hardware, I realized that there was too little I could learn about its current state. Remembering the mess of temporary debug code I had in previous projects, and the inability of the synthesis tool to directly access the qualified names of signals inside sub-modules, I implemented a rather simple debug infrastructure that uses a single register ring (like a simplified JTAG) through all the modules to debug, and matching Python code that allows access to individual bit fields of the ring. The design includes a single debug_master module and debug_slave modules in each of the design module instances that needs debugging (and in the modules above – up to the top one). By the time the camera was able to generate correct images, the total debug ring consisted of almost a hundred 32-bit registers; when I later disabled this debug functionality by commenting out a single `define DEBUG_RING macro, it recovered almost 5% of the device slices. The program output looks like:
x393 +0.001s--> print_debug 0x38 0x3e
038.00: compressors393_i.jp_channel0_i.debug_fifo_in [32] = 0x6e280 (451200)
039.00: compressors393_i.jp_channel0_i.debug_fifo_out [28] = 0x1b8a0 (112800)
039.1c: compressors393_i.jp_channel0_i.dbg_block_mem_ra [ 3] = 0x3 (3)
039.1f: compressors393_i.jp_channel0_i.dbg_comp_lastinmbo [ 1] = 0x1 (1)
03a.00: compressors393_i.jp_channel0_i.pages_requested [16] = 0x26c2 (9922)
03a.10: compressors393_i.jp_channel0_i.pages_got [16] = 0x26c2 (9922)
03b.00: compressors393_i.jp_channel0_i.pre_start_cntr [16] = 0x4c92 (19602)
03b.10: compressors393_i.jp_channel0_i.pre_end_cntr [16] = 0x4c92 (19602)
03c.00: compressors393_i.jp_channel0_i.page_requests [16] = 0x4c92 (19602)
03c.10: compressors393_i.jp_channel0_i.pages_needed [16] = 0x26c2 (9922)
03d.00: compressors393_i.jp_channel0_i.dbg_stb_cntr [16] = 0xcb6c (52076)
03d.10: compressors393_i.jp_channel0_i.dbg_zds_cntr [16] = 0xcb6c (52076)
03e.00: compressors393_i.jp_channel0_i.dbg_block_mem_wa [ 3] = 0x4 (4)
03e.03: compressors393_i.jp_channel0_i.dbg_block_mem_wa_save [ 3] = 0x0 (0)
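For readers wondering how such a ring maps to named values, the listing above suggests a decoding scheme along the following lines; the field table and function are a hypothetical illustration of mine, not the actual x393 Python code.

# Each debugged module contributes fields of known width, in ring order; after
# the ring is shifted out as one long bit vector, the fields are sliced out of it.
FIELDS = [
    ("debug_fifo_in", 32),
    ("debug_fifo_out", 28),
    ("dbg_block_mem_ra", 3),
    ("dbg_comp_lastinmbo", 1),
]

def decode_ring(ring_value: int):
    """Split an integer holding the shifted-out ring into named bit fields."""
    result = {}
    offset = 0
    for name, width in FIELDS:
        result[name] = (ring_value >> offset) & ((1 << width) - 1)
        offset += width
    return result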

Acquiring the first images

All the problems I encountered while trying to make the hardware work turned out to be reproducible (although not always easily) in simulation, and over the next 3 weeks I eliminated them one by one. When I got to the 51st version of the FPGA bitstream file (there were several more when I forgot to increment the version number), the camera started to produce consistently valid JPEG files.

First 4-sensor image acquired with NC393 camera

At that point I replaced a single sensor front end with no lens attached (just half of the input sensor window was covered with tape to produce a blurry shadow in the images) with four complete SFEs with lenses, simultaneously, using a piece of Eyesis4π hardware to point the individual sensors at 45° angles (in portrait mode), covering a combined 180°×60° FOV – this resulted in the images shown above. The sensor color gains are not calibrated (so there is a visible color mismatch) and the images are not stitched together (just placed side by side), but I consider it to be a significant milestone in the NC393 camera development.

SATA controller status

Almost at the same time, Alexey, who is working on the SATA controller for the camera, achieved an important milestone too. His code running in the Xilinx Zynq was able to negotiate and establish a link with an mSATA SSD connected to the NC393 prototype. There is still a fair amount of design work ahead until we are able to use this controller with the camera, but at least the hardware operation of this part of the design is now verified too.

What is next

Having all the hardware on the 10393 verified, we are now able to implement minor improvements and corrections to the 3 existing boards of the NC393 camera:

  • 10393 itself
  • 10389 – extension board with mSATA SSD, eSATA/USB combo connector, micro-USB and synchronization I/O
  • 10385 – power supply board

And then make the first batch of the new cameras that will be available for other developers and customers.
We also plan to make a new sensor board with the On Semiconductor (formerly Aptina, formerly Micron) MT9F002 – a 14MPix sensor with the same 1/2.3″ image format as the MT9P006 used with the current NC353 cameras. This 12-bit sensor will allow us to try a multi-lane high speed serial interface while keeping the same physical dimensions of the sensor board and using the same lenses as we use now.

by andrey at September 18, 2015 05:38 PM

September 13, 2015

Bunnie Studios

Name that Ware, September 2015

The Ware for September 2015 is shown below.

This is a little something I was gifted at Burning Man this year. I wore it around my neck for a week and then brought it back to my lab in Singapore and tore it apart. Obviously, it suffered some kind of severe trauma. I’m particularly enamored with the way the silicon melted — instead of revealing crystalline facets at the former wirebond pads, a smooth, remodeled and rather amorphous surface is revealed with rivulets of silicon radiating from the craters. Now that’s hot!

by bunnie at September 13, 2015 09:05 AM

Winner, Name that Ware August 2015

Last month’s ware is a controller board for a cutting machine, made by Polar-Mohr. The specific part number printed on the board is Polar SK 020162, which I’m guessing corresponds with this machine. Henry Valta pretty much nailed it, by guessing it as a Baum SK66 cutting circuit board. I’m not quite sure what the relationship is between Baumfolder and Polar-Mohr corporation, but it seems to be close enough that they share controller boards. Congrats, email me for your prize!

I do have to give a shout-out to zebonaut for noting the use of “V” designators for discrete semiconductors and linking it to German/DIN-compliant origins. I’m pretty good at picking out PCBs made by Japanese manufacturers, and this little factoid will now help me identify PCBs of EU/German design origin.

by bunnie at September 13, 2015 09:04 AM

September 12, 2015

Free Electrons

The quest for Linux friendly embedded board makers

Beagle Bone Black board

We used to keep a list of Linux friendly embedded board makers. When this page was created in the mid 2000s, it was easy to maintain. Though more and more products were created with Linux, it was still difficult to find good hardware platforms that were supported by Linux.

So, to help community members and system makers select hardware for their embedded Linux projects, we compiled a first selection of board makers that met the criteria below:

  • Offering attractive and competitive products
  • At least one product supporting Free Software operating systems (such as Linux, eCos and NetBSD).
  • At least one product meeting the above requirements, with a public price (without having to register), and still available on the market.
  • Specifications and documentation directly available on the website (no registration required). Engineers like to study their options on their own without having to share their contact details with salespeople who would then chase them through their entire life, trying to sell inappropriate products to them.
  • Website with an English version.

In the beginning, this was enough to reduce the list to 10-20 entries. However, as Linux continued to increase in popularity, and as hardware platform makers started to understand the value of transparent pricing and technical documentation, the criteria were no longer sufficient to keep the list manageable.

Therefore, we added another prerequisite: at least one product supported (at least partially) in the official version of the corresponding Free Software operating system kernel. This was a rather strong requirement at first, but only such products bring a guarantee of long term community support, making it much easier to develop and maintain embedded systems. Compare this with hardware supporting only a very old and heavily patched Linux kernel, for example, whose software can only be maintained by its original developers. This requirement also reveals the ability of the hardware vendor to work with the community and share technical information with its users and developers.

Then, with the development of low-cost community boards, and chip manufacturers efforts to support their hardware in the mainline Linux kernel, the list again became difficult to maintain.

The next prerequisite we could add is the availability as Open-source hardware, allowing customers to modify the hardware according to their needs. Of course, hardware files should be available without registration.

However, rather than keeping our own list, the best option is to contribute to Wikipedia, which has a dedicated page on Open-Source computing hardware. At least, all the boards we could find are listed there, after we added a few.

Don’t hesitate to post comments to this page to share information about hardware which could be worth adding to this Wikipedia page!

Anyway, the good news is that Linux and Open-Source friendly hardware is now much easier to find than it was about 10 years ago. Just have a preference for hardware that is supported in the mainline Linux kernel sources, or at least from a maker whose earlier products are already supported. A git grep -i command in the kernel sources (for example, git grep -il "am335x" arch/arm/) will help.

by Michael Opdenacker at September 12, 2015 05:21 PM

September 06, 2015

Video Circuits

DIY video VCO

Here are some shots of early XR2206-based video VCO experiments. The important thing with video is getting the sync pulses from your SPG into a format that your oscillator circuit wants to sync to: some are fine with narrow pulses, some want a nice clean saw wave, or need the pulse to hit a certain voltage threshold. This means that if you don't have the skills to modify whatever SPG or VCO you have chosen, you will need sync conditioning circuits to sit in between, getting the two to talk nicely.




by Chris (noreply@blogger.com) at September 06, 2015 09:43 AM

August 31, 2015

Free Electrons

Linux 4.2 released, Free Electrons contributions inside

Adelie Penguin
Last Sunday, Linus Torvalds released the 4.2 version of the Linux kernel. LWN.net covered the merge window of this 4.2 release cycle in 3 parts (part 1, part 2 and part 3), giving a lot of details about the new features and important changes.

In a more recent article, LWN.net published some statistics about the 4.2 development cycle. In those statistics, Free Electrons appears as the 10th contributing company by number of patches with 203 patches integrated, and Free Electrons engineer Maxime Ripard is in the list of most active developers by changed lines, with 6000+ lines changed. See also http://www.remword.com/kps_result/ for more kernel contribution statistics.

This time around, the most important contributions of Free Electrons were:

  • Support for Atmel ARM processors:
    • The effort to clean up arch/arm/mach-at91/ continued, now that the conversion to the Device Tree and multiplatform is complete. This was mainly done by Alexandre Belloni.
    • Support for the ACME Systems Arietta G25 was added by Alexandre Belloni.
    • Support for the RTC on at91sam9rlek was also added by Alexandre Belloni.
    • Significant improvements were brought to the dmaengine xdmac and hdmac drivers (used on Atmel SAMA5D3 and SAMA5D4), bringing interleaved support, memset support, and better performance for certain use cases. This was done by Maxime Ripard.
  • Support for Marvell Berlin ARM processors:
    • In preparation for the addition of a driver for the ADC, an important refactoring of the reset, clock and pinctrl drivers was done by using a regmap and the syscon mechanism to more easily share the common registers used by those drivers. Work done by Antoine Ténart.
    • An IIO driver for the ADC was contributed, which relies on the syscon and regmap mentioned above, as the ADC uses registers that are mixed with the clock, reset and pinctrl ones.
    • The Device Tree files were relicensed under GPLv2 and X11 licenses.
  • Support for Marvell EBU ARM processors:
    • A completely new driver for the CESA cryptographic engine was contributed by Boris Brezillon. This driver aims at replacing the old mv_cesa drivers, by supporting the newer features of the cryptographic engine available in recent Marvell EBU SoCs (DMA, new ciphers, etc.). The driver is backward compatible with the older processors, so it will be a full replacement for mv_cesa.
    • A big cleanup/verification work was done on the pinctrl drivers for Armada 370, 375, 38x, 39x and XP, leading to a number of fixes to pin definitions. This was done by Thomas Petazzoni.
    • Various fixes were made (suspend/resume improvements, big endian usage, SPI, etc.).
  • Support for the Allwinner ARM processors:
    • Support for the AXP22x PMIC was added by Boris Brezillon, including the support for the regulators provided by this PMIC. This PMIC is used on a significant number of Allwinner designs.
    • A small number of Device Tree files were relicensed under GPLv2 and X11 licenses.
    • A big cleanup of the Device Tree files was done by using the “DT label based syntax” more aggressively.
    • A new driver, sunxi_sram, was added to support the SRAM memories available in some Allwinner processors.
  • RTC subsystem:
    • As was announced recently, Free Electrons engineer Alexandre Belloni is now the co-maintainer of the RTC subsystem. He has set up a Git repository at https://git.kernel.org/cgit/linux/kernel/git/abelloni/linux.git/ to maintain this subsystem. During the 4.2 release cycle, 46 patches were merged in the drivers/rtc/ directory: 7 were authored by Alexandre, and all other patches (with the exception of two) were merged by Alexandre, and pushed to Linus.

The full details of our contributions:

by Thomas Petazzoni at August 31, 2015 08:53 PM

Video Circuits

How Video Post-Production Effects were done in the 80s

Continuing the theme of effects videos, here is a nice one about 80s era video effects.

by Chris (noreply@blogger.com) at August 31, 2015 07:54 AM

August 19, 2015

Bunnie Studios

Name that Ware August 2015

The Ware for August 2015 is shown below.

I found this kicking around in the South China Material market this past June. It is indeed a production board (and still in use today!), so there is a definitive answer to this month’s challenge sitting somewhere in the cloud. The extensive use of CD4000 series CMOS chips in this board brings a little grin to my face — haven’t seen one of those in ages (except for the CD4066, which is still pretty handy even in contemporary situations).

Also, as a bonus, I found this in the same shop. This one isn’t for guessing, just for looking at. I’m a fan of FANUC.

As an administrative note, images from this site and the kosagi wiki, and a few other miscellaneous services, will be off-line for a bit on September 2nd. There’s maintenance work scheduled on the power grid at my flat, and so my servers will be brought off-line. If all goes well, it’ll be just 15 minutes. However, if the mains breaker to my unit doesn’t automatically reset, it could be up to a few hours before someone can get to it. I’ll be somewhere in Black Rock City, far from the Internet, while this all goes down…so if something really unfortunate happens, it could be a week before things get restored from backups.

by bunnie at August 19, 2015 10:31 AM

Winner, Name that Ware July 2015

The Ware for July 2015 was a bootlegged version of CAPCOM’s Carrier Air Wing. Congrats to pdw for nailing it, email me for your prize!

And a big thanks to Felipe Sanches for contributing last month’s ware and helping to judge the winner.

by bunnie at August 19, 2015 10:31 AM

August 16, 2015

Video Circuits

Video Screening in Tokyo

Alex organised a great screening in Tokyo – check out the flyer:




by Chris (noreply@blogger.com) at August 16, 2015 07:16 AM

August 10, 2015

ZeptoBARS

LM319M : weekend die-shot

LM319M - "high speed" (80ns) dual comparator.
Die size 2017x700 µm.


August 10, 2015 05:09 AM

August 03, 2015

Free Electrons

Free Electrons talks at the Embedded Linux Conference Europe

Father Mathew Bridge

The Embedded Linux Conference Europe 2015 will take place on October 5-7 in Dublin, Ireland. As usual, the entire Free Electrons engineering team will participate in the event, as we believe it is one of the great ways for our engineers to remain up-to-date with the latest embedded Linux developments and connect with other embedded Linux and kernel developers.

The conference schedule has been announced recently, and a number of talks given by Free Electrons engineers have been accepted:

We submitted other talks that were rejected, probably because both of them had already been given at the Embedded Linux Conference in California: Maxime Ripard’s talk on dmaengine and Boris Brezillon’s talk on supporting MLC NAND. We regret the latter, since Boris is currently actively working on this topic, so we expect to have some useful results by the time of ELCE, compared to his ELC talk which was mostly a presentation of the issues and some proposals to address them. Interested readers can anyway watch those talks and/or read the slides.

In addition to the Embedded Linux Conference Europe itself:

  • Thomas Petazzoni will participate in the Buildroot developers meeting on October 3/4, right before the conference.
  • Alexandre Belloni will participate in OEDEM, the 2015 OpenEmbedded Developer’s European Meeting, taking place on October 9, after the conference.

by Thomas Petazzoni at August 03, 2015 12:08 PM

July 29, 2015

Elphel

NC393 progress update and a second life of the NC353 FPGA code

Another update on the development of the NC393 camera: we have finished adding the FPGA code that re-implements the functionality of the NC353 camera (just with additional multi-sensor capability), including the JPEG/JP4 compressors, IMU/GPS logger and inter-camera synchronization. The next step is simulation and debugging, and it will involve co-simulation of the same sensor image data with the code of the existing NC353 camera. This requires updating that camera’s code to a state compatible with the development tools we use, so an additional sub-project was spawned.

Verilog code development with VDT plugin for Eclipse IDE

Before describing the renovation of the NC353 camera FPGA code, I need to say a few words about the software we have been using for the last year. Living in a world where FPGA chip manufacturers have a monopoly (or a duopoly, as there are 2 major players) on rather poor software tools, I realize that this will not change in the short term. But it is possible to constrain those proprietary creations to designated “cages”, letting them do only the tasks that require secret knowledge of the chip internals, while not letting them take control of the whole development process or make you depend on them abandoning one software environment and introducing another half-made one as soon as you have gotten used to the previous.

This is what VDT is about – it uses one of the most standard development environments, the Eclipse IDE, and combines it with a heavily modified version of VEditor and the Tool Specification Language (TSL) that allows developers to integrate additional tools without getting inside the plugin code itself. Integration involves writing tool descriptions in TSL (this work is based on the tool manufacturer’s manual that specifies command options and parameters) and possibly creating custom parsers for the tool output – these programs may be written in any programming language the developer is comfortable with.
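As a purely hypothetical example of such an output parser (the record format and the exact log wording are assumptions, not what VDT actually expects), a few lines of Python are enough to turn a synthesis log into machine-readable messages:

import sys

def parse_log(stream):
    """Yield (severity, message) pairs for lines such as 'WARNING: ...' or 'ERROR: ...'."""
    for line in stream:
        line = line.strip()
        for severity in ("ERROR", "WARNING"):
            if line.startswith(severity + ":"):
                yield severity, line[len(severity) + 1:].strip()
                break

if __name__ == "__main__":
    # Example: some_synthesis_tool ... | python parse_log.py
    for severity, message in parse_log(sys.stdin):
        print(f"{severity}|{message}")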

Current integration includes the Free Software simulation programs (such as Icarus Verilog with GtkWave). As it is safe to rely on Free Software, we may add code specific to these programs in the plugin body to have deeper integration and combine code and waveform navigation and breakpoint support.

For the FPGA synthesis and implementation tools, this software supports Xilinx ISE and Vivado, and we are now working on Altera Quartus too. There is no VDT code dependence on the specifics of each of these tools, and the tools are connected to the IDE using ssh and rsync, so they do not have to run on the same workstation.

Renovating the NC353 camera code

Initially I just planned to enter the NC353 camera FPGA code into the VDT environment for simulation. When I opened it in this IDE, it showed more than 200 warnings in the code. Most were just unused wires/registers and signal width mismatches that did not impact the functioning of the camera, but at least one was definitely a bug – one that gets control only on very rare occasions and so is difficult to catch.

When I had fixed most of these warnings and made sure simulation worked, I decided to try to run the ISE 14.7 tools and generate a functional bitstream. There were multiple incompatibilities between ISE 10 (which was last used to generate a bitstream) and the current version – most modifications were needed to change the description of the I/O standard and other parameters of the device pins (from the constraint file and “// synthesis attribute …” comments in the code to the modern style of using parameters).

That turned out to be doable – first I made the design agree with all the tools up to the very last step (bitstream generation), then reconciled the generated pad report with the one generated by the old tools (there are still some differences remaining, but they are understandable and OK). Finally I had to figure out that I needed to turn on a non-default option to use timing constraints, and how to change the speed grade to match the one used with the old tools; that resulted in a bitstream file that I tested on just one camera, and I got images. It was the second attempt – the first one resulted in a “kernel panic” and I had to reflash the camera. The project repository has a detailed description of how to make such testing safe, but it is still better to try using your modified FPGA code only if you know how to “unbrick” the camera.

We’ll do more testing of the bit files generated by ISE 14.7, but for now we need to focus on the NC393 development and use the NC353 code as a reference for simulation.

Back to NC393

Before writing simulation test code for the NC393 camera, I made the code pass all the Vivado tools and produce a bitfile. That required some code tweaking, but finally it worked. Of course there will be some code changes to fix bugs revealed during verification, but most likely the changes will not be radical. This assumption allows us to see the overall device utilization and confirm that the final design is going to fit.

Table 1. NC393 FPGA Resources Utilization

  Type                  Used    Available   Utilization (%)
  Slice                 14222   19650       72.38
  LUT as Logic          31448   78600       40.01
  LUT as Memory         1969    26600       7.40
  LUT Flip Flop Pairs   44868   78600       57.08