copyleft hardware planet

December 21, 2018

Bunnie Studios

Exclave: Hardware Testing in Mass Production, Made Easier

Reputable factories will test 100% of every product shipped. For example, the computer or phone you’re using to read this has had a plug inserted in every connector, along with dozens of internal and external tests run to confirm everything from the correct operation of the CPU to the proper function of the buttons.


A test station at a motherboard factory (2x speed). Every port and connector gets tested.

Even highly automated processes can yield defective units: entropy happens, and constant vigilance is required to guard against it. Even a very stable manufacturing process with a raw defect rate of around 1% is considered unacceptable by any reputable brand. This is one of the elephants in the digital fabrication room – just because a tool is digital doesn’t mean it will fabricate things perfectly at the push of a button. Every tool needs maintenance, and more often than not a skilled operator is required to inspect the final product and polish over rough edges.

To better grasp the magnitude of the factory test problem, consider the software that’s loaded on your computer. How did it get in there? Devices come out of the silicon foundry mostly blank. They typically don’t even have the innate knowledge to traverse a filesystem, much less connect to the Internet to download an update. Yet everyone has had the experience of waiting for an update to download and install. Factories must orchestrate a much more time-consuming and complicated process to bootstrap every device made, in order for you to enjoy the privilege of connecting to the Internet to download updates.

One might think, “surely, there must be a standardized way for handling this”.

Shockingly, there isn’t.

How Not To Test a Product

Unfortunately, first-time product makers often make the assumption that either products don’t require 100% testing (because the boards are assembled by robots, and robots don’t make mistakes, right?), or that there is some standardized way to handle the initial firmware upload. Once upon a time, I was called upon to intervene on a factory test for an Arduino-derivative product, where the original test specification was literally “plug the device into the USB port of [your] laptop, and type in this AVRDUDE command to load code, and then type in another AVRDUDE command to set the fuses, and then use a multimeter to check the voltages on these two test points”. The test documentation was literally two photographs of the laptop screen and a paragraph of text. The product’s designer argued to the factory that this was sufficient because it’s really quick and reliable: he does it in under two minutes; how could any competent factory that handles products with AVR chips not have heard of AVRDUDE; and besides, he encountered no defects in the half dozen prototypes he produced by hand. This was in addition to an over-arching attitude of “whatever, I’m the smart guy who comes up with the ideas, just get your minimum-wage Chinese laborers to stop messing them up”.

The reality is that asking someone to manually run commands from a shell and read a meter for hours on end while expecting zero defects is neither humane nor practical. Furthermore, assuming operators have the ability and judgment to run command-line scripts isn’t realistic; testing is time-consuming, and thus often the least-skilled, lowest-wage laborers are employed for the process. Ironically, there is no correlation between the skills required to assemble a computer and the skills required to operate a computer. Thus, in order for the factory to meet the product designer’s expectation of low labor cost with simultaneously high quality, it’s up to the product designer to come up with an automated, fool-proof test jig.

Introducing the Test Jig: The Product Behind the Product

“Test jig” is a generic term for any tool designed to assist with production testing. However, there is a basic format for a test jig chassis, and demand for them is so high in places like Shenzhen that entire cottage industries have sprung up to meet it. Most circuit board test jigs look a bit like this:


Above: NeTV2 circuit board test jig

And the short video below highlights the spring-loaded pogo pins of the test jig, along with how a circuit board is inserted into a test jig and clamped in place for testing.


Above: Inserting an NeTV2 PCB into its test jig.

As you can see in the video, the circuit board is placed into a precision-milled platter that moves along spring-loaded rails, allowing the board to engage with pogo-pin style test points underneath. As test points consume precious space on the circuit board, the overall mechanical accuracy of the system has to be better than +/-1mm once all tolerances are considered over thousands of cycles of wear and tear, in order to keep the test points a reasonable size (under 2mm in diameter).

The specific test jig shown above measures 12 separate DC voltages, performs a basic JTAG ID code check on the FPGA, loads firmware, and tests the on-board DRAM, all in under 20 seconds. It’s the preliminary “fast test” of the NeTV2 product, meant to screen out gross solder faults; it provides an estimated coverage of about 80% of the solder joints on the PCB. The remaining 20% of the solder joints belong principally to connectors, which require a much more labor-intensive manual test to check.

Here’s a look inside the test jig:

If it looks complicated, that’s because it is. Test jig complexity is correlated with product complexity, which is why I like to say the test jig is the “product behind the product”. In some cases, a product designer may spend even more time designing a test jig than they spend designing the product itself. There’s a very large space of problems to consider when implementing a test jig, ranging from test coverage to operator fatigue, and of course throughput and reliability.

Here’s a list of the basic issues to consider when designing a test jig:

  • Coverage: How to test every single feature?
  • UX: Who is interpreting your test data? How to internationalize the UI by using symbols and colors instead of text, and how to minimize operator fatigue?
  • Automation: What’s the quickest way to set up and tear down tests? How to avoid relying on human judgment?
  • Audit & traceability: How do you enforce testing standards? How to incorporate logging and coupons to facilitate material traceability?
  • Updates: What do you do when the tester needs a patch or update? How do you keep the test program in lock-step with the current firmware release?
  • Responsibility: Who is responsible for product quality? How do you create a natural incentive to design-for-test from the very first product sketch?
  • Code Structure: How do you maintain the tester’s code base? It’s tempting to think that test jig code should be write-once, since it’s going into a single device with a limited user base. However, the reality of production is rarely so simple, and it pays to structure your code base so that it’s self-checking, modular, reconfigurable, and reliable.

Each of these bullet points is an aspect of test jig design that I have learned from the school of hard knocks.

Read on, and avoid my mistakes.

Coverage

Ideally, a tester should cover 100% of the features of a product. But what, exactly, constitutes a feature? I once designed a product called the Chumby One, and I also designed its test procedure. I tried my best to cover all of its features, but I missed one: the power button. It seemed simple enough – just a switch, what could go wrong? It turns out that over the course of production, the tolerance between the mechanical switch pusher and the electrical switch mechanism had drifted to the point where pushing on the cap would not contact the electrical switch itself, leading to a cohort of returns from that production lot.

Even the simplest of mechanisms is a feature that needs to be tested.

Since that experience, I’ve adopted an “inside/outside” methodology to derive the test feature list. First, I look “inside” the product, going through the schematic and picking key features for testing. The priority is to check for solder faults as quickly as possible, based on the assumption that the constituent components are 100% pre-tested and reliable. Then, I look at the product from the “outside”, as a consumer might approach it. First, I look at the marketing brochure and see what was promised: “world class WiFi performance” demands a different level of test from “product has WiFi”. Then, I try to imagine all the ways a customer might interact with the product – such as pressing the power button – and add those points to the test list. This means every connector needs to have something stuffed into it, every switch needs to be pressed, and every indicator light needs to be checked.


Red arrow calls out the mechanical switch pusher that drifted out of tolerance with the corresponding electrical switch

UX

Test jig UX can have a large impact on test throughput and reliability; test operators are human, and like all humans are susceptible to fatigue and boredom. A startup I worked with once told me a story of how a simple UX change drastically improved test throughput. They had a test that would take 10 minutes on average to run, so in order to achieve a net throughput of around 1 minute per unit, they provided the factory with 10 testers. Significantly, the test run-time would vary from unit to unit, by as much as several minutes. Unfortunately, the only indicator of test state was a single light that could either flash or change color. Furthermore, the lighting pattern of units that failed testing bore a resemblance to units that were still running the test, so even when the operator noticed a unit that finished testing, they would often overlook failed units, assuming they were still running the test. As a result, the actual throughput achieved on their first production run was about one unit every 5 minutes — driving up labor costs dramatically.

Once they refactored the UX to include an audible chime that would play when the test was finished, aggregate test cycle time dropped to a bit over a minute – much closer to the original estimate.

Thus, while one might think UX is just for users, I’ve found it pays to make wireframes and mock-ups for the tester itself, and to spend some developer cycles to create an operator-friendly test program. In some ways, tester UX design is more challenging than the product UX: ideally, you’re creating a UX with icons that are internationally recognizable, using little or no text, so operators anywhere in the world can just sit down and use it with no special training. Furthermore, you’re trying to create user engagement with something as banal as a test – something that’s literally as boring as watching paint dry. I’ve even gone so far as putting a mini-game in the middle of a long test sequence to keep operators attentive. The mini-game was of course directly relevant to testing certain hardware sensors, but it was surprisingly effective because the operators would race each other on the mini-game to see who could finish the fastest, boosting throughput and increasing worker happiness.

At the end of the day, factories are powered by humans, and it pays to employ a human-first design process when crafting test programs.

Automation

Human operators are prone to error. The more a test can be automated, the more reliable it can be, and in the long run automation will save money. I once visited a large mobile phone maker’s factory, and witnessed a gymnasium-sized room full of test stations replaced by a pair of fully robotic test stations. Instead of hundreds of operators plugging cables in and checking aspects like screen and camera quality, a delicate ballet of robotic actuators would plug connectors into every port in a fraction of a second, and every feature of the phone, from the camera to the GPS, was tested in a couple of minutes. The test stations apparently cost about a million dollars to develop, but the empty cavern of idle test jigs sitting next to them was clear testament to the labor cost savings of such a high degree of automation.

At the smaller scales more typical of startups, automation can happen but it needs to be judiciously applied. Every robotic actuator takes time and money to develop, and they are also prone to wear-out and eventual failure. For the Chibitronics Chibi Chip product, there’s a single mechanical switch on the board, and we developed a simple servo mechanism to actuate the plunger. However, despite using a series-elastic spring and a foam pad to avoid over-stressing the servo motor, over time, we’ve found the motor still fails, and operators have disconnected it in favor of manually pushing the button at the right time.


The Chibi Chip test jig


Detail view of the reset switch servo

Indicator lights can also be tricky to test because the lighting conditions in a factory can be highly variable. Sometimes the floor is flooded by sunlight; other times, it’s lit by dim fluorescent lamps or LED bulbs, each with distinct noise signatures. A simple photodetector will be unreliable unless you can perfectly shield the device under test (DUT) from stray light sources. However, if the product’s LEDs can be modulated (with a PWM waveform, for example), the modulation can be detected through an AC-coupled photodetector. This system tends to be more reliable as the AC coupling rejects sunlight, and the modulation frequency can be chosen to be distinct from other stray light noise sources in the factory.

In general, the gold standard for test automation is to put the DUT into a jig, press a button, wait, and then a red or green light indicates if the device passes or fails. For simple products, this should be achievable, but reasonable exceptions should be made depending upon the resources available in a startup to implement tests versus the potential frequency and impact of a particular feature escaping the test process. For example, in the case of NeTV2, the functionality of indicator LEDs and the fan are visually inspected by the operator; but in my judgment, all the components involved have generous tolerances and are less likely to be assembled incorrectly, and there are other points downstream of the PCB test during the assembly process where the LEDs and fan operation will be checked yet again, further reducing the likelihood of these features escaping the test process.

Audit and Traceability

Here’s a typical failure scenario at a factory: one operator is running two testers in parallel. The lunch bell rings, and the operator gets up and leaves without noting the status of the test (if you’ve been doing the same thing over and over for the past four hours and running on an empty belly, you’d do the same thing too). After lunch, the operator sits down again, and has to recall whether the units in front of her have been tested or not. As a result of this arbitrary judgment call, sometimes units that didn’t pass test, or weren’t even tested at all, slip into the tested product bins after a shift change.

This is one of the many reasons why it pays to incorporate some sort of audit and traceability program into the tester and product itself. The exact nature of the program will depend greatly upon the exact nature of the product and amount of developer resources available, but a simple example is structuring the test program so that a serial number isn’t generated for the product until all the tests pass – thus, the serial number is a kind of “coupon” to prove the unit has passed test. In the operator-returning-from-lunch scenario, she just has to check for the presence of a serial number to determine the testing state of a particular unit.


The Chibi Chip uses Bitmarks as a coupon to indicate that a unit has passed test. The Bitmarks also help prevent warranty fraud and deter cloning.

Sometimes I also burn a log of the test into the product itself. It’s important to make the log a circular buffer that can store more than one test run, because oftentimes products that fail test the first time must be retested several times as they are reworked and repaired. This way, if a product is returned by a user, I can query the log and see a fairly complete history of the product’s rework experience in the factory. This is incredibly helpful in debugging factory process issues and holding the factory accountable for marginal practices such as re-testing a device multiple times without repairing it, with the hope that they get lucky and get a “pass” out of the tester due to random environmental fluctuations.

Ideally, these logs are sent up to the cloud or a server directly, but that will depend heavily upon the reliability of the Internet connectivity at your facility. Internet is notoriously unreliable in China, especially to servers not located on the mainland, and so sometimes a small startup with limited resources has to make compromises about the extent and nature of audit and traceability achievable on the factory floor.

Updates

Consumer electronic products are increasingly just software wrapped in a plastic shell. While the hardware itself must stabilize months before production, the software in a product continues to evolve, especially in Internet-connected products that support over-the-air updates. Sometimes patches to a product’s firmware can profoundly alter low-level APIs, breaking the factory test program. For example, I had a product once where the audio drivers went through a major upgrade, going from OSS to ALSA. This changed the way the microphone subsystem was accessed, causing the microphone test to fail in production. Thus user firmware updates can also necessitate a tester program update.

If a test jig was engineered as a stand-alone box that requires logging into a terminal to upgrade, every time the software team pushes an update, guess what – you’re hopping on a plane to the factory to log in to the test jig and upgrade it. This is not a sustainable upgrade plan for products that have complex, constantly evolving internal firmware; thus, as the test jig designer, it’s well-advised to build a secure remote upgrade process into the test jig itself.


That’s me about 12 years ago on a factory floor at 2AM debugging a testjig update gone wrong, bringing production to a screeching halt. Don’t be like me; you can do better!

In addition to a remote upgrade mechanism, you’re going to need a way to validate the test jig update without having to bring down a production line. In order to help with this, I always keep a physical copy of the production test jig in my office, so I can validate testjig updates from the comfort of my office before pushing them to the production floor. I try my best to keep the local jig an exact copy of what’s on the line; this may involve taking snapshots of the firmware image or swapping out OS drives between development and production versions, or deliberately breaking features that have somehow failed on the production jigs. This process is inspired by the engineers at JPL and NASA who keep an exact copy of Mars-based rovers on Earth, so they can thoroughly test an update before pushing it to the rover on Mars. While this discipline can be inconvenient and incurs the cost of an extra test jig, it’s inevitably cheaper than having to book a last minute flight to your factory to fix things because of an update gone wrong.

As for the upgrade mechanism itself, how fancy and secure you want to get has virtually no limit; I’ve done everything from manual swaps of USB thumb drives that contain the tester configuration data to a private VPN via a dedicated 3G-to-wifi gateway deployed at the factory site. The nature of the product (e.g. does it contain security keys, how often is the product firmware updated) and the funding level of your organization will heavily influence the architecture of the upgrade process.

Responsibility

Given how much effort it takes to build a good test jig, it’s tempting to free up precious developer resources by simply outsourcing the test jig to a third party. I’ve almost never found this to be a good idea. First of all, nobody but the developer knows what skeletons are hidden in a product’s closet. There’s what’s written in the spec, but then there is how faithfully the spec was implemented. Of course, in an ideal world, all specs were perfectly met, but only the developer has a true sense of how spot-on the implementation ended up. This drives the second point, which is avoiding the blame game. By throwing tests over the fence to a third party, if a test isn’t easy to implement or is generating false results, it’s easy to get into a finger-pointing exercise over who is at fault: the developer for not meeting the specs, or the test developer for not being creative enough to implement the test without necessitating design changes.

However, when the developer knows they are ultimately on the hook for the test jig, from day one the developer thinks about design for test. Where will the test points go? How do we make internal state easily visible? What bring-up sequence gives us the most test coverage in the shortest amount of time? By making the developer responsible for the test jig, the test program comes together as the product matures. Bring-up scripts used to validate the product are quickly converted to factory tests, and overall the product achieves a higher standard of testability while saving the money and resources that would otherwise be spent trying to coordinate between two parties with conflicting self-interests.

Code Structure

It’s tempting to think about a test jig as a pile of write-once code that doesn’t need to be maintainable. For simple products, one can definitely get away with this mentality. However, I’ve been bitten more than once by fragile code bases inside production testers. The most typical scenario where things break is when I have to change the order of tests, in order to prioritize testing problematic features first. It doesn’t make sense to test a dozen high-yielding features before running a test on a feature with a known yield issue. That just wastes operator time, and runs up the cost of production.

It’s also hard to predict before production what the most frequent mode of failure would be – after all, any failures you could have anticipated would already be designed out! So, quite often in the middle of an early production run, I’m challenged with having to change the order of tests in a complex sequence of tests to optimize operator time and improve production throughput.

Tests almost always have dependencies – you have to power on the board before you can flash the firmware; you need firmware before you can connect to wifi; you need credentials to connect to wifi; you have to clean up the test credentials before shipping the product. However, if the process that cleans up the test credentials is also responsible for cleaning up any other temporary tester files (for example, a flag that also sets Bluetooth into test mode), moving the wifi test sequence earlier could result in tester configuration files being left on the customer image, potentially leading to unexpected behaviors (such as Bluetooth still being in test mode in the shipping product!).

Thus, it’s helpful to have some infrastructure for tests that keeps each test modular while enforcing dependencies. Although one could write this code every single time from scratch, we encounter this problem so regularly that Sean ‘Xobs’ Cross set out to create a testjig management system to solve this problem “once and for all”. The result is a project he calls Exclave, with the idea being that Exclave – like an actual geographical exclave – is a tiny bit of territory that you can retain control of inside a foreign factory.

Introducing Exclave

Exclave is a scaffold designed to give structure to an otherwise amorphous blob of test code, while minimizing the amount of overhead required of the product designer to achieve this structure. The basic features of Exclave are as follows:

  • Code Re-use. During product bring-up, designers write simple scripts to validate each feature individually. Exclave attempts to re-use these scripts by making no assumption about the language used to write them. Python, C, Bash, Node.js, Rust – all are welcome, so long as they run on a command line and can return an exit code.
  • Automated dependency resolution. Each test routine is associated with a “.test” descriptor which describes the dependencies and timeout for a given script, which are then automatically resolved by Exclave.
  • Scenario management. Test descriptors are strung together into scenarios, which can be selected dynamically based on the real-time requirements of the factory.
  • Triggers. Typically a test is started by pressing a button, but Exclave’s flexible triggering system also allows tests to start on other cues, such as hot-plug events.
  • Multiple UI targets. Test jig UI can range from a red/green light to a serial console device to a full graphical interface running on a monitor. Exclave has a system for interpreting test results and driving multiple UI sinks. This allows for fast product debugging by attaching a GUI (via an HDMI monitor or laptop) while maintaining compatibility with cost-efficient LED indicators favored for production scale-up.


Above: Exclave helps migrate lab-bench validation code to production-grade factory tests.

To get a little flavor of what Exclave looks like in practice, let’s look at a couple of the tests implemented in the NeTV2 production test flow. First, the production test is split into two repositories: the test descriptors, and the graphical UI. Note that by housing all the tests in github, we also solve the tester upgrade problem by providing the factory with a set of git repo management scripts mapped to double-clickable desktop icons.

These repositories are installed on a Raspberry Pi contained within the test jig, and Exclave is started on boot as a systemd service. The service runs a simple script that fires up Exclave in a target directory which contains a “.jig” file. The “netv2.jig” file specifies the default scenario, among other things.

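For flavor, the .jig file is a short key=value descriptor; the sketch below is illustrative only, with key names approximated rather than copied verbatim from the repo:

    [Jig]
    Name=NeTV2 Test Jig
    Description=Raspberry Pi based production tester for NeTV2
    DefaultScenario=netv2-quick-test
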
Here’s an example of what a quick test scenario looks like:

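Roughly speaking, a scenario descriptor just names the tests to run, in the same key=value style; the key names and test names below are my approximations, not the literal contents of the netv2-tests repo:

    [Scenario]
    Name=NeTV2 quick test
    Description=Fast screen for gross solder faults
    Tests=power-on voltage-check fpga-idcode load-bitstream repl-shell ram-test power-off
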
This scenario runs a variety of scripts in different languages that: turn on the device (bash/C), check voltages (C), check the ID code of the FPGA (bash/openOCD), load a test bitstream (bash/openOCD), check that the REPL shell can start on the FPGA (Expect/TCL), and then run a RAM test (Expect/TCL) before shutting the board down (bash/C). Many of these scripts were copied directly from code used during board bring-up and system validation.

A basic operation that’s surprisingly tricky to do right is checking for terminal interaction (REPL shell) via serial port. Writing a C or bash script that does this correctly and gracefully handles all error cases is hard, but fortunately someone already solved this problem with the “Expect” TCL extension. Here’s what the REPL shell test descriptor looks like in Exclave:

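In rough form (key names and paths here are illustrative, not verbatim from the repo), the descriptor is just a few lines:

    [Test]
    Name=repl-shell
    Description=Check that the tester firmware REPL prompt comes up
    Requires=power-on load-bitstream
    Timeout=30
    ExecStart=/home/pi/netv2-tests/repl-shell.expect
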
As you can see, this points to a couple other tests as dependencies, sets a time-out, and also designates the location of the Expect script.

And this is what the Expect script looks like:

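The listing below is a simplified sketch rather than the production script (the serial device path, the use of picocom to open it, and the JSON field names are all assumptions), but it captures the structure:

    #!/usr/bin/expect -f
    # Sketch of the REPL shell check. Assumes the DUT console is on
    # /dev/ttyUSB0 at 115200 baud, opened via picocom.
    set timeout 2
    spawn picocom -b 115200 /dev/ttyUSB0

    # Nudge the console with a carriage return every two seconds until the
    # tester firmware prompt appears; the test's own timeout bounds the loop.
    send "\r"
    expect {
        "TESTER_NX8D>" {
            puts "{\"pass\": true, \"message\": \"REPL shell prompt found\"}"
            exit 0
        }
        "BIOS" {
            # The DUT escaped into the ROM BIOS, probably due to a RAM error.
            puts "{\"pass\": false, \"message\": \"DUT fell back to ROM BIOS (RAM error?)\"}"
            exit 1
        }
        timeout {
            send "\r"
            exp_continue
        }
    }
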
This one is a bit more specialized to the NeTV2, but basically, it looks for the NeTV2 tester firmware shell prompt, which is “TESTER_NX8D>”; the system will attempt to recover this prompt by sending a carriage-return sequence once every two seconds and searching for this special string in return. If it receives the string “BIOS” instead, this indicates that the NeTV2 failed to boot and escaped into the ROM BIOS, probably due to a RAM error; at which point, the Expect script prints a bunch of JSON, which is automatically passed up to the UI layer by Exclave to create a human-readable error message.

Which brings us to the interface layer. The NeTV2 jig has two options for UI: a set of LEDs, or an HDMI monitor. In an ideal world, the total amount of information an operator needs to know about a board is whether it passed or failed – a green or red LED. Multiple instances of the test jig are needed when a product enters high volume production (thousands of units per day), so the cost of each test jig becomes a factor during production scale-up. LEDs are orders of magnitude cheaper than an HDMI monitor; in fact, the jig itself will generally cost less than an HDMI monitor, so using LEDs instead of a monitor for the UI can dramatically slash the cost of scaling up production. On the other hand, a pair of LEDs does not give enough information to diagnose what’s gone wrong with a bad board. In a volume production scenario, one would typically collect the (hopefully small) fraction of failed boards and bring them to a secondary station where a more skilled technician debugs them. Exclave allows the same jig used in production to be placed at the debug station, but with an HDMI monitor attached to provide valuable detailed error reports.

With Exclave, both UIs are integrated seamlessly using “.interface” files. Below is an example of the .interface file that starts up the http daemon to enable JSON debugging via an HDMI monitor.

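Again in rough, illustrative form (the key names and daemon path are assumptions rather than the literal file):

    [Interface]
    Name=HTTP
    Description=Serve test events as JSON to the kiosk-mode browser UI
    Format=json
    ExecStart=/home/pi/netv2-tests/interfaces/http-server
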
In a nutshell, Exclave contains an event reporting system, which logs events in a fashion similar to Linux kernel messages. Events are tagged with metadata, such as severity, and the events are broadcast to interface handlers that further refine them for the respective UI element. In the case of the LEDs, it just listens for “START” [a scenario], “FAIL” [a test], and “FINISH” [a scenario] events, and ignores everything else. In the case of the HDMI interface, a browser configured to run in kiosk mode is pointed to the correct localhost webpage, and a jquery-based HTML document handles the dynamic generation of the UI based upon detailed messages from Exclave. Below is a screenshot of what the UI looks like in action.

The UI is deliberately brutalist in design, using color to highlight only the most important messages, and also includes audible alerts so that operators can zone out while the test runs.

As you can see, the NeTV2 production tester tests everything – from the LEDs to the Ethernet, to features that perhaps few people will ever use, such as the SD card slot and every single GPIO pin. Thanks to Exclave, I was able to get this complex set of tests up and running in under a month: the first code commit was made on Oct 13, 2018, and by Nov 7, I was largely just tweaking tests for performance, and to reflect operational realities discovered on the factory floor.

Also, for the hardware-curious, I did design a custom “hat” for the Raspberry Pi to add several ADC channels and various connectors to facilitate testing. You can check out the source for the tester hat at the Alphamax github repo. I had six of these boards built; five of them have found their way into various parts of the NeTV2 production flow, and if I still have one spare after production is stabilized, I’m planning on installing a replica of a tester at HAX in Shenzhen. That way, those curious to find out more about Exclave can walk up to the tester, log into it, and poke around (assuming HAX agrees to this).

Let’s Stop Re-Inventing the Test Jig!

The unspoken secret of hardware is that behind every product, there’s a robust test jig making sure that every unit shipped to end customers meets quality standards. Hardware startups that don’t anticipate the importance and difficulty of creating such a tester often encounter acute (and sometimes fatal) growing pains. Anytime I build more than a few copies of a piece of hardware, I know I’m going to need a test jig – even for bespoke, short-run products like a conference badge.

After spending months of agony re-inventing the wheel every time we shipped a product, Xobs decided to create Exclave. It’s still a work in progress, but by now it’s been used as the production test infrastructure for several volume products, including the Chibi Chip, Chibi Scope, Tomu, The Phage Blinky Badge, and now NeTV2 (those are all links to the actual Exclave test scripts for each of the respective products — open source ftw!). I feel Exclave has come along far enough that it’s time to invite more users to join the Exclave community and give it a try. The code is located on github and is 100% open source, and it’s written in Rust entirely by Xobs. It’s my hope that Exclave can mature into a tool and a community that will save countless Makers and small hardware startups the teething pains of re-inventing the test jig.


Production-proven testjigs that run Exclave. Clockwise from top-right: NeTV2, Chibi Chip, Chibi Scope, Tomu, and The Phage Blinky Badge. The badge tester has even survived a couple of weeks exposed to the harsh elements of the desert as a DIY firmware updating station!

by bunnie at December 21, 2018 06:23 AM

December 16, 2018

Bunnie Studios

Name that Ware December 2018

The Ware for December 2018 is shown below.

Finishing off the year with a (hopefully) easy one that’s slightly off the beaten path.

Happy holidays! Stay safe, and stay free.

by bunnie at December 16, 2018 12:06 PM

Winner: Name that Ware November 2018

The Ware for November 2018 is a bias/control board for the HP 2-18GHz YIG-tuned multiplier. I really appreciate this fascinating ware; it reminds me that the MOS transistor is not the be-all and end-all of electronics. Of course, every day we encounter crystals as frequency references, and those are literally shaved pieces of quartz, but here is a sphere of Yttrium Iron Garnet (YIG) being used as a tunable RF filter. Thanks to phantom deadline for contributing this ware, and also congrats to Brian for nailing the ware. Email me for your prize!

by bunnie at December 16, 2018 12:05 PM

December 06, 2018

Bunnie Studios

On Overcoming Pain

Breaking my knee this year was a difficult experience, but I did learn a lot from it. I now know more than I ever wanted to know about the anatomy of my knee and how the muscles work together to create the miracle of bipedal locomotion, and more importantly, I now know more about pain.

Pain is one of those things that’s very real to the person experiencing it, and a person’s perception of pain changes every time they experience a higher degree and duration of pain. Breaking my knee was an interesting mix of pain. It wasn’t the most intense pain I had ever felt, but it was certainly the most profound. Up until now, my life had been thankfully pain-free. The combination of physical pain, the sheer duration of the pain (especially post-surgery), and the corresponding intellectual anguish that comes from the realization that my life has changed for the worse in irreversible ways made this one of the most traumatizing experiences of my life. Despite how massive the experience was to me, I’m also aware that my experience is relatively minor compared to the pains that others suffer. This sobering realization gives me a heightened empathy for others experiencing great pain, or even modest amounts of pain on a regular basis. Breaking a knee is nothing compared to having cancer or a terminally degenerative disease like Alzheimer’s: at least in my case, there is hope of recovery, and that hope helped keep me going. However, a feeling of heightened empathy for those who suffer has been an important and positive outcome from my experience, and sharing my experiences in this essay is both therapeutic for me and hopefully insightful for others who have not had similarly painful life experiences.

I broke my knee on an average Saturday morning. I was wearing my paddling gear, walking to a taxi stand with my partner, heading for a paddle around the islands south of Singapore. At the time, my right knee was recovering from a partial tear of the quadriceps tendon; I had gone through about six weeks of immobilization and was starting physical therapy to rebuild the knee. Unfortunately that morning, one of the hawker stalls that line the alley to the taxis had washed its floor, causing a very slick soup of animal grease and soapy water to flood into the alley. I slipped on the puddle, and in the process of trying to prevent my fall, my body fully tore the quadriceps tendon while avulsing the patella – in other words, my thigh had activated very quickly to catch my fall, but my knee wasn’t up for it, and instead of bearing the load, the knee broke, and the tissue that connected my quads muscle to my knee also tore.

It’s well documented that trauma imprints itself vividly onto the brain, and I am no exception. I remember the peanut butter sandwich I had in my hand. The hat I was wearing. The shape and color of the puddle I slipped on. The loud “pop” of the knee breaking. The writhing on the floor for several minutes, crying out in pain. The gentlemen who offered to call an ambulance. The feeling of anguish – after six weeks in therapy for the partial tear, now months more of therapy to fix this, if fixable at all. I was looking forward to rebuilding my cardiovascular health, but that plan was definitely off. Then the mental computations about how much travel I’m going to have to cancel, the engagements and opportunities I will miss, the work I will fall behind upon. Not being able to run again. Not being able to make love quite the same way again. The flight of stairs leading to my front door…and finally, my partner, who was there for me, holding my hand, weeping by my side. She has been so incredibly supportive through the whole process, I owe my good health today to her. To this day, my pulse still rises when I walk through the same alley to the taxi. But I do it, because I know I have to face my fears to get over the trauma. My partner is almost always there with me when I walk through that particular alley, and her hand in mine gives me the strength I lack to face that fear. Thank you.

Back to the aspect of pain. Breaking the knee is an acute form of pain. In other words, it happens quickly, and the intensity of the pain drops fairly quickly. The next few days are a blur – initially, the diagnosis is just a broken kneecap, but an MRI revealed I had also torn the tendon. This is highly unusual; usually a chain fails at one link, and this is like two links of a chain failing simultaneously. The double-break complicates the surgery – now I’m visiting surgeons, battling with the insurance company, waiting through a three-day holiday weekend, with the knowledge that I have only a week or two before the tendon pulls back and becomes inoperable. I had previously written about my surgical experience, but here I will recap and reframe some of my experiences on coping with pain.

Pain is a very real thing to the person experiencing it. Those who haven’t felt a similar level of pain to the person suffering from pain can have trouble empathizing. In fact, there was no blood or visible damage to my body when I broke my knee – one could have also possibly concluded I was making it all up. After all, the experience is entirely within my own reality, and not those of the observers. However, I found out that during surgery I was injected with Fentanyl, a potent opioid pain killer, in addition to Propofol, an anesthetic. I asked a surgeon friend of mine why they needed to put opioids in me even though I was unconscious. Apparently, even if I am unconscious, the body has autonomous physiological responses to pain, such as increased bleeding, which can complicate surgery, hence the application of Fentanyl. Fentanyl is fast-acting, and wears off quickly – an effect I experienced first-hand. Upon coming out of the operation room, I felt surprisingly good. One might almost say amazing. I shouldn’t have, but that’s how powerful Fentanyl is. I had a six-inch incision cut into me and my kneecap had two holes drilled through it and sutures woven into my quads, and I still felt amazing.

Until about ten minutes later, when the Fentanyl wore off. All of a sudden I’m a mess – I start shivering uncontrollably, I’m feeling enormous amounts of pain coming from my knee; the world goes hazy. I mistake the nurse for my partner. I’m muttering incoherently. Finally, they get me transferred to the recovery bed, and they give me an oral mix of oxycodone and naloxone. My experience with oxycodone gives me a new appreciation of the lyrics to Pink Floyd’s “Comfortably Numb”:

There is no pain, you are receding
A distant ship smoke on the horizon
You are only coming through in waves
Your lips move but I can’t hear what you’re saying

That’s basically what oxycodone does. Post-op surgical pain is an oppressive cage of spikes wrapping your entire field of view; everywhere you look is pain…as the oxycodone kicks in, you can still see the spiky cage, but it recedes until it’s a distant ship smoke on the horizon. You can now objectify the pain, almost laugh at it. Everything feels okay, I gently drift to sleep…

And then two hours later, the naloxone kicks in. Naloxone is an anti-opioid drug, which is digested more slowly than the oxycodone. The hospital mixes it in to prevent addiction, and that’s very smart of them. I’ve charted portions of my mental physiology throughout my life, and that “feeling okay” sensation is pretty compelling – as reality starts to return, your first thought might be “Wait! I’m not ready for everything to not be okay! Bring it back!”. It’s not euphoric or fun, but the sensation is addictive – who wouldn’t want everything to be okay, especially when things are decidedly not okay? Naloxone turns that okay feeling into something more akin to a bad hangover. The pain is no longer a distant ship smoke on the horizon, it’s more something sitting in the same room with you staring you down, but with a solid glass barrier between you and it. Pain no longer consumes your entire reality, but it’s still your bedfellow. So my last memory of the drug isn’t a very fond one, and as a result I don’t have as much of an urge to take more of it.

After about a day and a half in the hospital, I was sent home with another, weaker opioid-based drug called Ultracet, which derives most of its potency from Tramadol. The mechanism is a bit more complicated and my genetic makeup made dosing a bit trickier, so I made a conscious effort to take the drug with discipline to avoid addiction. I definitely needed the pain killers – even the slightest motion of my right leg would result in excruciating pain; I would sometimes wake up at night howling because a dream caused me to twitch my quads muscle. The surgeon had woven sutures into my quads to hold my muscle to the kneecap as the tendon healed, and my quads were decidedly not okay with that. Fortunately, the principal effect of Ultracet, at least for me, is to make me dizzy, sleepy, and pee a lot, so basically I slept off the pain; initially, I was sleeping about 16 hours a day modulo pee breaks.

In about 2-3 days, I was slightly more functional. I was able to at least move to my desk and work for a couple hours a day, and during those hours of consciousness I challenged myself to go as long as I could without taking another dose of Ultracet. This went on for about two weeks, gradually extending my waking hours and taking Ultracet only at night to aid sleep, until I could sleep at night without the assistance of the opioids, at which point I made the pills inconvenient to access, but still available should the pain flare up. One of the most unexpected things I learned in this process is how tiring managing chronic pain can be. Although I had no reason to be so tired – I was getting plenty of sleep, and doing minimal physical activity (maybe just 15-30 minutes of a seated cardio workout every day) – I would be exhausted, because ignoring chronic pain takes mental effort. It’s a bit like how anyone can lift a couple pounds easily, but if you had to hold up a two-pound weight for hours on end, your arm would get tired after a while.

Finally, after a bit over forty years, I now understand why some women on their period take naps. A period is something completely outside of my personal physical experience, yet every partner I’ve loved has had to struggle with it once a month. I’d sometimes ask them to try and explain to me the sensation, so I could develop more empathy toward their experience and thereby be more supportive. However, what none of them told me was how exhausting it is to cope with chronic pain, even with the support of mild painkillers. I knew they would sometimes become tired and need a nap, but I had always assumed it was more a metabolic phenomenon related to the energetic expense of supporting the flow of menses. But even without a flow of blood from my knee, just coping with a modest amount of continuous pain for hours a day is simply exhausting. It’s something as a male I couldn’t appreciate until I had gone through this healing process, and I’m thankful now that I have a more intuitive understanding of what roughly half of humanity experiences once a month.

Another thing I learned was that the healing process is fairly indiscriminate. Basically, in response to the trauma, a number of growth and healing factors were recruited to the right knee. This caused everything in the region to grow (including the toe nails and skin around my foot and ankle) and scar over, not just the spots that were broken. My tendon, instead of being a separate tissue that could move freely, had bonded to the tissue around it, meaning immediately after my bone had healed, I couldn’t flex my knee at all. It took months of physiotherapy, massaging, and stretching to break up the tissue to the point where I could move my knee again, and then months more to try and align the new tissue into a functional state. As it was explained to me, I had basically a ball of randomly oriented tissue in the scarring zone, but for the tendons to be strong and flexible, the tissue needs to be stretched and stressed so that its constituent cells can gain the correct orientation.

Which led to another interesting problem – I now have a knee that is materially different in construction to the knee I had before. Forty-plus years of instinct and intuition have to be trained out of me, and on top of that, weeks of a strong mental association of excruciating pain with the activation of certain muscle groups. It makes sense that the body would have an instinct to avoid doing things that cause pain. However, in this case, that response led to an imbalance in the development of my muscles during recovery. The quads is not just one muscle, it’s four muscles – hence the “quad” in “quadriceps” – and my inner quad felt disproportionately more pain than the outer quad. So during recovery, my outer quad developed very quickly, as my brain had automatically biased my walking gait to rely upon the outer quad. Unfortunately, this leads to a situation where the kneecap is no longer gliding smoothly over the middle groove of the knee; with every step, the kneecap is grinding into the cartilage underneath it, slowly wearing it away. Although it was painless, I could feel a grinding, sometimes snapping sensation in the knee, so I asked my physiotherapist about it. Fortunately, my physiotherapist was able to diagnose the problem and recommend a set of massages and exercises that would first tire out the outer quad and then strengthen the inner quad. After about a month of daily effort I was able to develop the inner quad and my kneecap came back into alignment, moving smoothly with every step.

Fine-tuning the physical imbalances of my body is clockwork compared to the process of overcoming my mental issues. The memory of the trauma plus now incorrect reflexes makes it difficult for me to do some everyday tasks, such as going down stairs and jogging. I no longer have an intuitive sense of where my leg is positioned – lay me on my belly and ask me to move both legs to forty-five degrees, my left leg will go to exactly the right location, and my right leg will be off by a few degrees. Ask me to balance on my right leg, and I’m likely to teeter and fall. Ask me to hop on one foot, and I’m unable to control my landing despite having the strength to execute the hop.

The most frustrating part about this is that continuous exercise doesn’t lead to lasting improvement. The typical pattern is that on my first exercise, I’m unstable or weak, but as my brain analyzes the situation it can actively compensate, so that by my second or third exercise in a series I’m appearing functional and balanced. However, once I’m no longer actively focusing to correct for my imbalances, the weaknesses come right back. This mental relapse can happen in a matter of minutes. Thus, many of my colleagues have asked if I’m doing alright when they see me first going down a flight of stairs – the first few steps I’m hobbling as my reflexes take me through the wrong motions, but by the time I reach the bottom I’m looking normal, as my brain has finally compensated for the new offsets in my knee.

It’s unclear how long it will be until I’m able to re-train my brain and overcome the mental issues associated with a major injury. I still feel a mild sense of panic when I’m confronted with a wet floor, and it’s a daily struggle to stretch, strengthen, and balance my recovering leg. However, I’m very grateful for the love and support of my partner who has literally been there every step of the way with me; from holding my hand while I lay on the floor in pain, to staying overnight in the hospital, to weekly physiotherapy sessions, to nightly exercises, she’s been by my side to help me, to encourage me, and to discipline me. Her effort has paid off – to date my body has exceeded the expectations of both the surgeon and the physiotherapist. However, the final boss level is in between my ears, in a space where she can’t be my protector and champion. Over the coming months and years it’ll be up to me to grow past my memories of pain, overcome my mental issues and hopefully regain a more natural range of behaviors.

Although profound pain only comes through tragic experiences, it’s helped me understand myself and other humans in ways I previously could not have imagined. While I don’t wish such experiences on anyone, if you find yourself in an unfortunate situation, my main advice is to pay attention and learn as much as you can from it. Empathy is built on understanding, and by chronicling my experiences coping with pain, it helps with my healing while hopefully promoting greater empathy by enabling others to gain insight into what profound pain is like, without having to go through it themselves.


My right knee, 7-months post-op. Right thigh is much smaller than the left. Still a long way to go…

by bunnie at December 06, 2018 02:01 PM

November 30, 2018

Bunnie Studios

You Can’t Opt Out of the Patent System. That’s Why Patent Pandas Was Created!

A prevailing notion among open source developers is that “patents are bad for open source”, which means they can be safely ignored by everyone without consequence. Unfortunately, there is no way to opt-out of patents. Even if an entire community has agreed to share ideas and not patent them, there is nothing in practice that stops a troll from outside the community cherry-picking ideas and attempting to patent them. It turns out that patent examiners spend about 12 hours on average to review a patent, which is only enough time to search the existing patent database for prior art. That’s right — they don’t check github, academic journals, or even do a simple Google search for key words.

Once a patent has been granted, even with extensive evidence of prior art, it is an expensive process to challenge it. The asymmetry of the cost to file a patent — around $300 — versus the cost to challenge an improperly granted patent — around $15,000-$20,000 — creates an opportunity for trolls to patent-spam innovative open source ideas, and even if only a fraction of the patent-spam is granted, it’s still profitable to shake down communities for multiple individual settlements that are each somewhat less than the cost to challenge the patent.

Even though open source developers are “in the right” (the publication and sharing of ideas does create prior art), in practice the fact that the community routinely shuns patents means our increasingly valuable ideas are only becoming more vulnerable to trolling. Many efforts have been launched to create prior art archives, but unfortunately, examiners are not required to search them, so in practice these archives offer little to no protection against patent spamming.

The co-founder of Chibitronics, Jie Qi, was a victim of not one but two instances of patent-spam on her circuit sticker invention. In one case, a crowdfunding backer patented her idea, and in another, a large company (Google) attempted to patent her idea after encountering it in a job interview. In response to this, Jie spent a couple years studying patent law and working with law clinics to understand her rights. She’s started a website, Patent Pandas, to share her findings and create a resource for other small-time and open source innovators who are in similar dilemmas.

As Jie’s experience demonstrates, you can’t opt out of patents. Simply being open is unfortunately not good enough to prevent trolls from patent-spamming your inventions, and copyright licenses like BSD are, well, copyright licenses, so they aren’t much help when it comes to patents: copyrights protect the expression of ideas, not the ideas themselves. Only patents can protect functional concepts.

Learn more about patents, your rights, and what you can do about them in a friendly, approachable manner by visiting Patent Pandas!

by bunnie at November 30, 2018 04:53 AM

Name that Ware November 2018

The Ware for November 2018 is shown below.

Thanks to phantom deadline for sharing this ware! I enjoyed reading up about it.

by bunnie at November 30, 2018 01:54 AM

Winner, Name that Ware October 2018

The Ware for October 2018 is an RFID transponder; this particular model was originally used in the early 2000’s in Colorado. Congrats to Barry Callahan for guessing it, email me for your prize!

by bunnie at November 30, 2018 01:54 AM

October 30, 2018

Bunnie Studios

Name That Ware, October 2018

The Ware for October 2018 is shown below.

Thanks to Michael Dwyer for submitting this ware!

by bunnie at October 30, 2018 06:10 PM

Winner, Name that Ware September 2018

The Ware for September 2018 is a 24GHz microwave radar module (CFK401A1T1R-V2). Congrats to phantom deadline for nailing it, email me for your prize! Snapped a photo of this one nestled inside one of those road-side “Your Current Speed Is” signs, thought it was pretty cool looking. The high-performance RF PCB dielectrics always catch my eye with their ivory-white color.

by bunnie at October 30, 2018 06:10 PM

September 29, 2018

Bunnie Studios

Name that Ware, September 2018

The Ware for September 2018 is shown below.

Been a busy month banging my head against the wall of getting FCC/CE certification for NeTV2, and spending thousands of dollars on dozens of tests — more time, effort, and treasure than developing the product itself. This is my least favorite aspect of product development — the regulatory burdens are just so immense if you actually try to comply with all the rules, especially with such a global marketplace (every region you legally serve multiplies your paperwork load, not to mention different SKUs for power supplies & manual/packaging translations).

Rather ironic to have finally figured out all the technical tricks to make production in small batches efficient, only to find there’s no efficient way to deal with regulatory hurdles. It’s a discouraging message for small-time makers and innovators, and tilts things in favor of large corporations with the funding and scale to build internal certification teams and facilities to make the regulatory process efficient and predictable.

by bunnie at September 29, 2018 10:14 AM

Winner, Name that Ware August 2018

The Ware for August 2018 is a Micom router, with an EasyRouter HCF feature pack installed. It’s interesting how the ROM cartridge drew many people to the conclusion that this was some sort of old laser printer motherboard, thanks to the ancient practice of purchasing fonts as physical ROM cartridges — I would have thought the same initially, except most laser printers also have a pretty substantial general purpose CPU built into them, and there isn’t enough RAM on board for that type of application either. SAM nailed it — I only slightly blurred some of the numbers to try and give something to search for, as Micom is a pretty obscure company and almost everything about this piece of hardware pre-dates modern search engines. Congrats, email me for your prize!

by bunnie at September 29, 2018 10:08 AM

September 28, 2018

Harald Welte

Fernvale Kits - Lack of Interest - Discount

Back in December 2014 at 31C3, bunnie and xobs presented their exciting Fernvale project and how they reverse engineered parts of the MT6260 ARM SoC, which also happens to contain a Mediatek GSM baseband.

Thousands (at least hundreds) of people have seen that talk live. To date, 2506 people (or AIs?) have watched the recordings on youtube, 4859 more people on media.ccc.de.

Given that Fernvale was the closest you could get to having a hackable baseband processor / phone chip, I expected at least as much interest in this project as we received four years earlier with OsmocomBB.

As a result, in early 2015, sysmocom decided to order 50 units of Fernvale DVT2 evaluation kits from bunnie, and to offer them in the sysmocom webshop to ensure the wider community would be able to get the boards they need for research into widely available, inexpensive 2G baseband chips.

This decision was made purely for the perceived benefit of the community: Make an exciting project available for anyone. With that kind of complexity and component density, it's unlikely anyone would ever solder a board themselves. So somebody has to build some and make it available. The mark-up sysmocom put on top of bunnie's manufacturing cost was super minimal, only covering customs/import/shipping fees to Germany, as well as minimal overhead for packing/picking and accounting.

Now it's almost four years after bunnie + xobs' presentation, and of those 50 Fernvale boards, we still have 34 (!) units in stock. That means only 16 people on this planet ever had an interest in playing with what at the time I thought was one of the most exciting pieces of equipment to play with.

So we lost somewhere on the order of 3600 EUR in dead inventory, for something that was never supposed to be a business anyway. That sucks, but I still think it was worth it.

In order to minimize the losses, sysmocom has now discounted the boards and reduced the price from EUR 110 to EUR 58.82 (excluding VAT). I have very limited hope that this will increase the amount of interest in this project, but well, you've got to try :)

In case you're thinking "oh, let's wait some more time, until they hand them out for free", let me tell you: If money is the issue that prevents you from playing with a Fernvale, then please contact me with the details about what you'd want to do with it, and we can see about providing them for free or at substantially reduced cost.

In the worst case, it was ~ 3600 EUR we could have invested in implementing more Osmocom software, which is sad. But would I do it again if I saw a very exciting project? Definitely!

The lesson learned here is probably that even a technically very exciting project backed by world-renowned hackers like bunnie doesn't mean that anyone will actually ever do anything with it, unless they get everything handed to them on a silver platter, i.e. all the software/reversing work is already done for them by others. And that actually makes me much more sad than the loss of those ~ 3600 EUR in sysmocom's balance sheet.

I also feel even more sorry for bunnie + xobs. They've invested time, money and passion into a project that nobody really seemed to want to get involved in and/or take further. ("nobody" is meant figuratively. I know there were/are some enthusiasts who did pick it up. I'm talking about the big picture). My condolences to bunnie + xobs!

by Harald Welte at September 28, 2018 10:00 PM

September 24, 2018

Michele's GNSS blog

Learning a bit about Beidou3 signals

I have been pretty busy these last couple of years contributing features and improvements to the Piksi Multi measurement engine, but during the summer I managed to contribute to a blog post on the Swift Navigation page together with two colleagues of mine whom I thank sincerely: Michael Wurm (lead FPGA engineer) and Keerthan Jaic (Python and many-other-languages expert).
We studied the Beidou3 signals a bit; the constellation is growing at an incredible pace. For interested readers, here is the link.

A screenshot of the Swift Console says more than 1000 words:



There should be data and code you can all play with. Questions are welcome; hope you enjoy the read.

P.S.: I won't be at the ION this year (too busy with other things) but I will likely be at the Stanford PNT meeting in November.

by noreply@blogger.com (Michele Bavaro) at September 24, 2018 12:39 AM

September 18, 2018

Harald Welte

Wireshark dissector for 3GPP CBSP - traces wanted!

I recently was reading 3GPP TS 48.049, the specification for the CBSP (Cell Broadcast Service Protocol), which is the protocol between the BSC (Base Station Controller) and the CBC (Cell Broadcast Centre). It is how, according to the spec, the CBC instructs the BSCs to broadcast the various cell broadcast messages to their respective geographic scope.

While OsmoBTS and OsmoBSC do have support for SMSCB on the CBCH, there is no real interface in OsmoBSC yet through which an external application could instruct it to send cell broadcasts. The only existing interface is a VTY command, which is nice for testing and development, but hardly a scalable solution.

So I was reading up on the specs, discovered CBSP, and thought one good way to get familiar with it would be to write a Wireshark dissector for it. You can find the result at https://code.wireshark.org/review/#/c/29745/
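
To give a feel for what the dissector has to decode: as I read TS 48.049, every CBSP message starts with a one-octet message type followed by a 24-bit length covering the remaining octets, and the body is then a series of TLV information elements. The little Python sketch below parses just that outer header; it is purely illustrative (the field layout is my reading of the spec) and is of course not the dissector itself, which is a native C dissector against the Wireshark API.

    import struct

    # Illustrative sketch only: parse the outer CBSP message header as I read
    # TS 48.049 -- one octet of message type, then a 24-bit big-endian length
    # giving the number of octets that follow (the TLV-encoded IEs).
    def parse_cbsp_header(data: bytes):
        if len(data) < 4:
            raise ValueError("CBSP header is 4 octets")
        msg_type = data[0]
        length = struct.unpack(">I", b"\x00" + data[1:4])[0]
        return msg_type, length, data[4:4 + length]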

Now my main problem is that as usual there appear to be no open source implementations of this protocol, so I cannot generate any traces myself. More surprising is that it's not even possible to find any real-world CBSP traces out there. So I'm facing a chicken-and-egg problem. I can only test / verify my wireshark dissector if I find some traces.

So if you happen to have done any work on cell broadcast in 2G networks and have a CBSP trace around (or can generate one): Please send it to me, thanks!

Alternatively, you can of course also use the patch linked above, build your own wireshark from scratch, test it and provide feedback. Thanks in either case!

by Harald Welte at September 18, 2018 10:00 PM

September 09, 2018

Dieter Spaar

A Pico Base Station from Ericsson

The RBS6401 is a small indoor base station from Ericsson for WCDMA or LTE. The device is about two times the size of an ip.access nanoBTS. It is based on a KeyStone I SoC from TI and runs Linux (in fact there are two KeyStone I SoCs inside, but it seems that only one of them is used, at least for WCDMA).

Compared to the other commercial base stations I have seen so far, the RBS6401 makes it quite hard to get access to the operating system. It tries to set up a VPN to a Security Gateway for Autointegration into the operator's network, and there is only a simple Web Interface to set the network parameters of the device.

Unfortunately I have only found the WCDMA software on the three devices I have access to. It would really be nice to use the RBS6401 with LTE; WCDMA is not that interesting (I am not aware of any Open Source RNC). If anyone has the LTE software for the RBS6401, please let me know.

September 09, 2018 02:00 AM

August 31, 2018

Video Circuits

VECTOR HACK 2018


I haven't updated in a while (a write-up of my amazing signal culture trip last year is still coming).

But this is too good not to leave a short post about. I am helping Ivan and Derek with their amazing Vector Hack Festival project. Vector Hack is a festival centred around experimental vector graphics using oscilloscopes and lasers. It's happening over two sites, starting in Zagreb on 01/10/18 and ending in Ljubljana on 07/10/18. I will be attending, so if you decide to come, see you there!

Ivan and Derek have worked tirelessly to make this happen and it's going to be amazing.

vectorhackfestival.com
instagram.com/vectorhackfestival
facebook.com/vectorhackfestival


Vector Hack festival 2018. trailer I from i.m. klif on Vimeo.

by Chris (noreply@blogger.com) at August 31, 2018 03:49 AM

August 26, 2018

Harald Welte

Still alive, just not blogging

It's been months without any update to this blog, and I feel sad about that. Nothing particular has happened to me, everything is proceeding as usual.

At the Osmocom project we've been making great progress on a variety of fronts, including

  • 3GPP LCLS (Local Call, Local Switch)
  • Inter-BSC hand-over in osmo-bsc
  • load-based hand-over in osmo-bsc
  • reintroducing SCCPlite compatibility to the new BSC code in osmo-bsc / libosmo-sigtran
  • finishing the first release of the SIMtrace2 firmware
  • extending test coverage on all fronts, particularly in our TTCN-3 test suites
  • tons of fixes to the osmo-bts measurement processing / reporting
  • higher precision time of arrival reporting in osmo-bts
  • migrating osmocom.org services to new, faster servers

At sysmocom, next to the Osmocom topics above, we've

  • made the sysmoQMOD remote SIM firmware much more robust and reliable
  • finally made SIMtrace2 hardware kits available again, after months of delays
  • created automatic testing of pySim-prog and sysmo-usim-util
  • extended our osmo-gsm-tester based automatic testing setup to include multi-TRX nanoBTS setups

In terms of other topics,

  • my wife and I went on a three-week motorbike tour all over the Alps in July
  • I've done tons of servicing (brake piston fittings, brake tubes, fuel line, fixing rust/paint, replacing clutch cable, choke cable, transmission chain, replacing several rusted/worn-out needle bearings, and much more) on my 22-year-old BMW F650ST to prepare it for many more years to come. As some type-specific spare parts (mostly plastic parts) are becoming rarer, it was best to take care of replacements sooner rather than later
  • some servicing/repairs to my 19-year-old Audi A4 car (which passed the German mandatory inspection without any deficiency at the first attempt!)
  • some servicing of my Yamaha FZ6
  • repaired my Fairphone 2 by swapping the microphone module (mike was mute)
  • I've re-vamped a lot of the physical/hardware infrastructure for gnumonks.org and other sites I run, which was triggered by having to move racks

by Harald Welte at August 26, 2018 10:00 PM

August 14, 2018

Bunnie Studios

Name that Ware, August 2018

The Ware for August 2018 is shown below.

Thanks to Patricio Worthalter for contributing this ware!

by bunnie at August 14, 2018 10:02 AM

Winner, Name that Ware July 2018

The Ware for July 2018 is an I/O board for the x86-based Sega Nu arcade platform. Congrats to megabytephreak for nailing the ware, email me for your prize! Also props to Vegard for pointing out the JVS connector (the thing that looks like USB but isn’t).

I was curious how certain readers were able to identify this as an arcade-related ware, and the answer I received is that the I/O breakout board is the key — the DIP switches and push-buttons are typical of how arcade machines are configured and serviced, and the JVS connector, if you can recognize it, is a dead give-away. The coin cell for retaining configs & high scores even if the power is pulled is also a potential tell. You learn something new every day!

by bunnie at August 14, 2018 10:02 AM

July 18, 2018

Bunnie Studios

Tariffs in a Nutshell

I was asked to distill a previous post about tariffs into something more accessible to the general public. The resulting article ended up being run on CNN Digital as an opinion piece:

In retaliation for unfair trade practices and the theft of American innovations and ideas, the US Trade Representative’s office is imposing a 25% tariff on a broad range of goods imported from China.

But these tariffs won’t help American workers. Instead, they will encourage American companies to push ideas and production overseas by raising the cost of raw materials without penalizing the import of finished goods.
[…]
Imagine a bakery located in the US. It uses imported flour, sugar and cacao to make delectable cakes based on a closely-guarded secret family recipe handed down for generations, and it employs dozens of bakers to do this. Now suppose a bakery in China has tried to copy the recipe…

The article uses a bakery as an analogy to explain the trade war situation, and also thinks through why trade deficits are OK, using the notion that buying a T-shirt at a store creates a “trade deficit” between you and the store, yet in the end that trade deficit is actually quite helpful to you. You can read the full article on CNN Digital.

I had also prepared a short infographic to accompany the article, which wasn’t picked up by CNN, but you can enjoy it here.

by bunnie at July 18, 2018 11:44 AM

Name that Ware July 2018

The Ware for July 2018 is shown below.

Thanks to the little birdie that dropped this ware in my inbox. :) Really nice photography work.

by bunnie at July 18, 2018 06:45 AM

Winner, Name that Ware June 2018

The ware for June 2018 is a TI TALP1000B. Marcan nailed it, congrats! email me for your prize.

Here’s some more photos of the ware, in case you want more context:

It’s like a science project that became a product.

by bunnie at July 18, 2018 06:44 AM

June 19, 2018

Bunnie Studios

New US Tariffs are Anti-Maker and Will Encourage Offshoring

The new 25% tariffs announced by the USTR, set to go into effect on July 6th, are decidedly anti-Maker and ironically pro-offshoring. I’ve examined the tariff lists (List 1 and List 2), and it taxes the import of basic components, tools and sub-assemblies, while giving fully assembled goods a free pass. The USTR’s press release is careful to mention that the tariffs “do not include goods commonly purchased by American consumers such as cellular telephones or televisions.”

Think about it – big companies with the resources to organize thousands of overseas workers making TVs and cell phones will have their outsourced supply chains protected, but small companies that still assemble valuable goods from basic parts inside the US are about to see significant cost increases. Worse yet, educators, already forced to work with shoe-string budgets, are going to return from their summer recess to find that basic parts, tools and components for use in the classroom are now significantly more expensive.


Above: The Adafruit MetroX Classic Kit is representative of a typical electronics education kit. Items marked with an “X” in the above image are potentially impacted by the new USTR tariffs.

New Tariffs Reward Offshoring, Encourage IP Flight

Some of the most compelling jobs to bring back to the US are the so-called “last screw” system integration operations. These often involve the complex and precise process of integrating simple sub-assemblies into high-value goods such as 3D printers or cell phones. Quality control and IP protection are paramount. I often advise startups to consider putting their system integration operations in the US because difficult-to-protect intellectual property, such as firmware, never has to be exported if the firmware upload operation happens in the US. The ability to leverage China for low-value subassemblies opens more headroom to create high-value jobs in the US, improving the overall competitiveness of American companies.

Unfortunately, the structure of the new tariffs is exactly the opposite of what you would expect to bring those jobs back to the US. Stiff new taxes on simple components, sub-assemblies, and tools like soldering irons, contrasted against a lack of taxation on finished goods, push business owners to send these “last screw” operations overseas. Basically, with these new tariffs, the more value-add sent outside the borders of the US, the more profitable a business will be. Not even concerns over IP security could overcome a 25% increase in base costs and keep operations in the US.

It seems the intention of the new tariff structure was to minimize the immediate pain that voters would feel in the upcoming mid-terms by waiving taxes on finished goods. Unfortunately, the reality is it gives small businesses that were once considering setting up shop in the US a solid reason to look off-shore, while rewarding large corporations for heavy investments in overseas operations.

New Tariffs Hurt Educators and Makers

Learning how to blink a light is the de-facto introduction to electronics. This project is often done with the help of a circuit board, such as a Microbit or Chibi Chip, and a type of light known as an LED. Unfortunately, both of those items – simple circuit boards and LEDs – are about to get 25% more expensive with the new tariffs, along with other Maker and educator staples such as capacitors, resistors, soldering irons, and oscilloscopes. The impact of this cost hike will be felt throughout the industry, but most sharply by educators, especially those serving under-funded school districts.


Above: Learning to blink a light is the de-facto introduction to electronics, and it typically involves a circuit board and an LED, like those pictured above.

Somewhere on the Pacific Ocean right now floats a container of goods for ed-tech startup Chibitronics. The goods are slated primarily for educators and Makers that are stocking up for the fall semester. It will arrive in the US the second week of July, and will likely be greeted by a heavy import tax. I know this because I’m directly involved in the startup’s operations. Chibitronics’ core mission is to serve the educator market, and as part of that we routinely offered deep discounts on bulk products for educators and school systems. Now, thanks to the new tariffs on the basic components that educators rely upon to teach electronics, we are less able to fulfill our mission.

A 25% jump in base costs forces us to choose between immediate price increases or cutting the salaries of our American employees who support the educators. These new tariffs are a tax on America’s future – they deprive some of the most vulnerable groups of access to technology education, making future American workers less competitive on the global stage.


Above: Educator-oriented learning kits like the Chibitronics “Love to Code” are slated for price increases this fall due to the new tariffs.

Protectionism is Bad for Technological Leadership

Recently, I was sent photos by Hernandi Krammes of a network card that was manufactured in Brazil around 1992. One of the most striking features of the card was how retro it looked – straight out of the 80’s, a full decade behind its time. This is a result of Brazil’s policy of protectionist tariffs on the import of high-tech components. While stiff tariffs on the import of microchips drove investment in local chip companies, trade barriers meant the local companies didn’t have to be as competitive. With less incentive to re-invest or upgrade, local technology fell behind the curve, leading ultimately to anachronisms like the Brazilian Ethernet card pictured below.


Above: this Brazilian network card from 1992 features design techniques from the early 80’s. It is large and clunky compared to contemporaneous cards.

Significantly, it’s not that the Brazilian engineers were any less clever than their Western counterparts: they displayed considerable ingenuity getting a network card to work at all using primarily domestically-produced components. The tragedy is instead of using their brainpower to create industry-leading technology, most of their effort went into playing catch-up with the rest of the world. By the time protectionist policies were repealed in Brazil, the local industry was too far behind to effectively compete on a global scale.

Should the US follow Brazil’s protectionist stance on trade, it’s conceivable that some day I might be remarking on the quaintness of American network cards compared to their more advanced Chinese or European counterparts. Trade barriers don’t make a country more competitive – in fact, quite the opposite. In a competition of ideas, you want to start with the best tech available anywhere; otherwise, you’re still jogging to the starting line while the competition has already finished their first lap.

Stand Up and Be Heard

There is a sliver of good news in all of this for American Makers. The list of commodities targeted in the trade war is not yet complete. The “List 2” items – which include all manner of microchips, motors, and plastics (such as 3D printer PLA filament and acrylic sheets for laser cutting) that are building blocks for small businesses and Makers – have yet to be ratified. The USTR website has indicated in the coming weeks they will disclose a process for public review and comment. Once this process is made transparent – whether you are a small business owner or the parent of a child with technical aspirations – I encourage you to please share your stories and concerns on how you will be negatively impacted by these additional tariffs.

Some of the List 2 items still under review include:

9030.31.00 Multimeters for measuring or checking electrical voltage, current, resistance or power, without a recording device
8541.10.00 Diodes, other than photosensitive or light-emitting diodes
8541.40.60 Diodes for semiconductor devices, other than light-emitting diodes, nesoi
8542.31.00 Electronic integrated circuits: processors and controllers
8542.32.00 Electronic integrated circuits: memories
8542.33.00 Electronic integrated circuits: amplifiers
8542.39.00 Electronic integrated circuits: other
8542.90.00 Parts of electronic integrated circuits and microassemblies
8501.10.20 Electric motors of an output of under 18.65 W, synchronous, valued not over $4 each
8501.10.60 Electric motors of an output of 18.65 W or more but not exceeding 37.5 W
8501.31.40 DC motors, nesoi, of an output exceeding 74.6 W but not exceeding 735 W
8544.49.10 Insulated electric conductors of a kind used for telecommunications, for a voltage not exceeding 80 V, not fitted with connectors
8544.49.20 Insulated electric conductors nesoi, for a voltage not exceeding 80 V, not fitted with connectors
3920.59.80 Plates, sheets, film, etc, noncellular, not reinforced, laminated, combined, of other acrylic polymers, nesoi
3916.90.30 Monofilament nesoi, of plastics, excluding ethylene, vinyl chloride and acrylic polymers

Here’s some of the “List 1” items that are set to become 25% more expensive to import from China, come July 6th:

Staples used by every Maker or electronics educator:

8515.11.00 Electric soldering irons and guns
8506.50.00 Lithium primary cells and primary batteries
8506.60.00 Air-zinc primary cells and primary batteries
9030.20.05 Oscilloscopes and oscillographs, specially designed for telecommunications
9030.33.34 Resistance measuring instruments
9030.33.38 Other instruments and apparatus, nesoi, for measuring or checking electrical voltage, current, resistance or power, without a recording device
9030.39.01 Instruments and apparatus, nesoi, for measuring or checking

Circuit assemblies (like Microbit, Chibi Chip, Arduino):

8543.90.68 Printed circuit assemblies of electrical machines and apparatus, having individual functions, nesoi
9030.90.68 Printed circuit assemblies, NESOI

Basic electronic components:

8532.21.00 Tantalum fixed capacitors
8532.22.00 Aluminum electrolytic fixed capacitors
8532.23.00 Ceramic dielectric fixed capacitors, single layer
8532.24.00 Ceramic dielectric fixed capacitors, multilayer
8532.25.00 Dielectric fixed capacitors of paper or plastics
8532.29.00 Fixed electrical capacitors, nesoi
8532.30.00 Variable or adjustable (pre-set) electrical capacitors
8532.90.00 Parts of electrical capacitors, fixed, variable or adjustable (pre-set)
8533.10.00 Electrical fixed carbon resistors, composition or film types
8533.21.00 Electrical fixed resistors, other than composition or film type carbon resistors, for a power handling capacity not exceeding 20 W
8533.29.00 Electrical fixed resistors, other than composition or film type carbon resistors, for a power handling capacity exceeding 20 W
8533.31.00 Electrical wirewound variable resistors, including rheostats and potentiometers, for a power handling capacity not exceeding 20 W
8533.40.40 Metal oxide resistors
8533.40.80 Electrical variable resistors, other than wirewound, including rheostats and potentiometers
8533.90.80 Other parts of electrical resistors, including rheostats and potentiometers, nesoi
8541.21.00 Transistors, other than photosensitive transistors, with a dissipation rating of less than 1 W
8541.29.00 Transistors, other than photosensitive transistors, with a dissipation rating of 1 W or more
8541.30.00 Thyristors, diacs and triacs, other than photosensitive devices
8541.40.20 Light-emitting diodes (LED’s)
8541.40.70 Photosensitive transistors
8541.40.80 Photosensitive semiconductor devices nesoi, optical coupled isolators
8541.40.95 Photosensitive semiconductor devices nesoi, other
8541.50.00 Semiconductor devices other than photosensitive semiconductor devices, nesoi
8541.60.00 Mounted piezoelectric crystals
8541.90.00 Parts of diodes, transistors, similar semiconductor devices, photosensitive semiconductor devices, LED’s and mounted piezoelectric crystals
8504.90.75 Printed circuit assemblies of electrical transformers, static converters and inductors, nesoi
8504.90.96 Parts (other than printed circuit assemblies) of electrical transformers, static converters and inductors
8536.50.90 Switches nesoi, for switching or making connections to or in electrical circuits, for a voltage not exceeding 1,000 V
8536.69.40 Connectors: coaxial, cylindrical multicontact, rack and panel, printed circuit, ribbon or flat cable, for a voltage not exceeding 1,000 V
8544.49.30 Insulated electric conductors nesoi, of copper, for a voltage not exceeding 1,000 V, not fitted with connectors
8544.49.90 Insulated electric conductors nesoi, not of copper, for a voltage not exceeding 1,000 V, not fitted with connectors
8544.60.20 Insulated electric conductors nesoi, for a voltage exceeding 1,000 V, fitted with connectors
8544.60.40 Insulated electric conductors nesoi, of copper, for a voltage exceeding 1,000 V, not fitted with connectors

Parts to fix your phone if it breaks:

8537.10.80 Touch screens without display capabilities for incorporation in apparatus having a display
9033.00.30 Touch screens without display capabilities for incorporation in apparatus having a display
9013.80.70 Liquid crystal and other optical flat panel displays other than for articles of heading 8528, nesoi
9033.00.20 LEDs for backlighting of LCDs
8504.90.65 Printed circuit assemblies of the goods of subheading 8504.40 or 8504.50 for telecommunication apparatus

Power supplies:

9032.89.60 Automatic regulating or controlling instruments and apparatus, nesoi
9032.90.21 Parts and accessories of automatic voltage and voltage-current regulators designed for use in a 6, 12, or 24 V system, nesoi
9032.90.41 Parts and accessories of automatic voltage and voltage-current regulators, not designed for use in a 6, 12, or 24 V system, nesoi
9032.90.61 Parts and accessories for automatic regulating or controlling instruments and apparatus, nesoi
8504.90.41 Parts of power supplies (other than printed circuit assemblies) for automatic data processing machines or units thereof of heading 8471

by bunnie at June 19, 2018 04:13 AM

June 08, 2018

Harald Welte

Re-launching openmoko USB Product ID and Ethernet OUI registry

Some time after Openmoko went out of business, they made their USB Vendor IDs and IEEE OUI (Ethernet MAC address prefix) available to Open Source Hardware / FOSS projects.

After maintaining that for some years myself, I was unable to find time to continue the work and I had handed it over some time ago to two volunteers. However, as things go, those volunteers also stopped responding to PID / OUI requests, and we're now launching the third attempt at continuing this service.

As the openmoko.org wiki will soon be moved into an archive of static web pages only, we're also moving the list of allocated PID and OUIs into a git repository.

Since git.openmoko.org is also about to be decommissioned, the repository is now at https://github.com/openmoko/openmoko-usb-oui, next to all the archived openmoko.org repository mirrors.

This also means that in addition to sending an e-mail application for getting an allocation in those ranges, you can now send a pull-request via github.

Thanks to cuvoodoo for volunteering to maintain the Openmoko USB PID and IEEE OUI allocations from now on!

by Harald Welte at June 08, 2018 10:00 PM

June 05, 2018

Bunnie Studios

Developing Apps for Your TV the Easy and Open Way

The biggest screen in your house would seem a logical place to integrate cloud apps, but TVs are walled gardens. While it’s easy enough to hook up a laptop or PC and pop open a browser, there’s no simple, open framework for integrating all that wonderful data over the TV’s other inputs.

Until now. Out of the box, NeTV2’s “NeTV Classic Mode” makes short work of overlaying graphics on top of any video feed. And thanks to the Raspberry Pi bundled in the Quickstart version, NeTV2 app developers get to choose from a diverse and well-supported ecosystem of app frameworks to install over the base Raspbian image shipped with every device.

For example, Alasdair Allan’s article on using the Raspberry Pi with Magic Mirror and Google AIY contains everything you need to get started on turning your TV into a voice-activated personal assistant. I gave it a whirl, and in just one evening I was able to concoct the demo featured in the video below.



Magic Mirror is a great match for NeTV2, because all the widgets are formatted to run on a black background. Once loaded, I just had to set the NeTV2’s chroma key color to black and the compositing works perfectly. Also, Google AIY’s Voicekit framework “just worked” out of the box. The only fussy bit was configuring it to work with my USB microphone, but thankfully there’s a nice Hackaday article detailing exactly how to do that.

Personally, I find listening to long-form replies from digital assistants like Alexa or Google Home a bit time consuming. As you can see from this demo, NeTV2 lets you build a digital assistant that pops up data and notifications over the biggest screen in your house using rich visual formats. And the best part is, when you want privacy, you can just unplug the microphone.

If you can develop an app that runs on a Raspberry Pi, you already know everything you need to integrate apps into any TV. Thanks to NeTV2, there’s never been an easier or more open way to make the biggest screen in your house the smartest screen.

The NeTV2 is crowdfunding now at CrowdSupply.com, and we’re just shy of our first stretch goal: a free Tomu bundled with every board. Normally priced at $30, Tomu is a tiny open-source computer that fits in a USB Type-A port, and it’s the easiest way to add an extra pair of status LEDs to an NeTV2. Help unlock this deal by backing now and spreading the word!

by bunnie at June 05, 2018 04:01 PM

June 03, 2018

Bunnie Studios

Name that Ware, June 2018

The Ware for June 2018 is shown below.

This month we’ll start with the very zoomed in and slightly torn down view of the ware, and if nobody’s nailed it I’ll release some more contextual images throughout the month.

Special thanks to Nava for giving me this ware!

Update June 19, 2018: Looks like people are stumped by this one, so providing a little more context to see if anyone gets it…

by bunnie at June 03, 2018 10:14 AM

Winner, Name that Ware May 2018

The NeTV2-MVP DVT1E rev shown in the May 2018 ware has 933 vias and 63 holes for a total of 996 drill hits. The closest guess for total number of drill hits was Jonathan, at 69 holes and 888 vias = 957 drill hits (39 shy of the total). Mangel was in a close second place with 84 holes and 962 vias for a total of 1046 drill hits (50 over the total). So, Jonathan’s the winner! Congrats, please email me to claim your board!

by bunnie at June 03, 2018 10:14 AM

May 28, 2018

Dieter Spaar

A personal remark about security research

In the last few years I have done a lot of automotive security research. One of my findings, about cars from BMW, which was the result of working on contract for the ADAC, was published in February 2015: the original German version and the translated English version.

A few days ago Tencent Keen Security Lab published their findings about BMW cars in this paper.

While in general I find it great that there is more public research in the area of automotive security, I really don't like it when previous work isn't respected. The report of Tencent Keen Security Lab does not contain any reference to previous work besides their own. I would at least have expected references to previous QNX research (QNX is the OS used in the BMW infotainment ECU) and to my work when it comes to the TCB (Telematic Communication Box, the name of the BMW telematic ECU). For example, regarding the TCB, the description of the communication protocol with the backend and of how encrypted SMS is used, with identical encryption keys in all cars, was already contained in my report. To find out why there are no references to others' work, I contacted them, but I didn't receive any reply.

Could it be that they really aren't aware of previous work? If you start researching the BMW telematic ECU, you would very soon become aware of NGTP (Next Generation Telematics Protocol), the protocol used to communicate with the BMW backend. Searching on Google for "TCB" and "NGTP" will give you the link to the German article in first place. OK, it's just the German article and you would have to open it to find the link to the English translation, which requires an additional step.

What if they don't use Google but Baidu? Searching for "TCB" and "NGTP" on Baidu gave this link in fourth place. I can't read the page, but it seems to be the Chinese translation of the article from Heise; even the text in the diagrams is translated. Interestingly, when I tried to reproduce my search from last week on Baidu today, this page no longer appears in the search results, although the page itself still exists.

So for me it looks like this: Tencent Keen Security Lab either simply doesn't give references to previous work and follows the old Chinese tradition of "Clone and Copy", or they don't have any clue about information gathering, which is one of the first steps when starting to research a target. In my opinion neither of the two possibilities is what you really want.

May 28, 2018 02:00 AM

May 24, 2018

Harald Welte

openmoko.org archive down due to datacenter issues

Unfortunately, since about 11:30 am CEST on May 24, openmoko.org has been down due to some power outage related issues at Hetzner, the hosting company at which openmoko.org has been hosted for more than a decade now.

The problem seems to have caused quite a lot of fall-out for many servers (Hetzner is hosting some 200k machines; not sure how many were affected, though), and Hetzner is anything but verbose when it comes to actually explaining what the issue is.

All they have published is https://www.hetzner-status.de/en.html#8842 - which is rather tight lipped about some power grid issues. But then, what do you have UPSs for if not for "a strong voltage reduction in the local power grid"?

The openmoko.org archive machine is running in Hetzner DC10, by the way. This is where they've had the largest number of tickets.

In any case, we'll have to wait for them to resolve their tickets. They appear to be working day and night on that.

I have a number of machines hosted at Hetzner, and I'm actually rather happy that none of the more important systems were affected that long. Some machines simply lost their uplink connectivity for some minutes, while some others were rebooted (power outage). The openmoko.org archive is the only machine that didn't automatically boot after the outage, maybe the power supply needs replacement.

In any case, I hope the service will be back up again soon.

btw: Guess who's been paying the hosting costs ever since Openmoko, Inc. shut down? Yes, yours truly. It was OK for something like 9 years, but now I want to recursively pull the dynamic content through some cache, which can then be made permanent. The resulting static archive can then be moved to some VM somewhere, without requiring a dedicated root server. That should reduce the costs to almost nothing.

by Harald Welte at May 24, 2018 10:00 PM

OsmoCon 2018 CfP closes on 2018-05-30

One of the difficulties with OsmoCon2017 last year was that almost nobody submitted talks / discussions within the deadline, early enough to allow for proper planning.

This led to a situation where the sysmocom team had to come up with a schedule/agenda on their own. Later on, well after the CfP deadline, people squeezed in talks, making the overall schedule too full.

It is up to you to avoid this situation again in 2018 at OsmoCon2018 by submitting your talk RIGHT NOW. We will be very strict regarding late submissions. So if you would like to shape the Agenda of OsmoCon 2018, this is your chance. Please use it.

We will have to create a schedule soon, as [almost] nobody will register for a conference unless the schedule is known. If there's not sufficient contribution in terms of CfP response from the wider community, don't complain later that 90% of the talks are from sysmocom team members and cover only Cellular Network Infrastructure topics.

You have been warned. Please make your CfP submission in time at https://pretalx.sysmocom.de/osmocon2018/cfp before the CfP deadline on 2018-05-30 23:59 (Europe/Berlin)

by Harald Welte at May 24, 2018 10:00 PM

May 23, 2018

Harald Welte

Mailing List hosting for FOSS Projects

Recently I've encountered several occasions in which a FOSS project would have been interested in some reliable, independent mailing list hosting for their project communication.

I was surprised how difficult it was to find anyone running such a service.

From the user / FOSS project point of view, the criteria that I would have are:

  • operated by some respected entity that is unlikely to turn hostile, discontinue the service or go out of business altogether
  • free of any type of advertisements (we all know how annoying those are)
  • cares about privacy, i.e. doesn't sell the subscriber lists or non-public archives
  • use FOSS to run the service itself, such as GNU mailman, listserv, ezmlm, ...
  • an easy path to migrate away to another service (or self-hosting) as they grow or their requirements change. A simple mail forward to that new address for the related addresses is typically sufficient for that

If you think mailing lists serve no purpose these days anyways, and everyone is on github: Please have a look at the many thousands of FOSS project mailing lists out there still in use. Not everyone wants to introduce a dependency to the whim of a proprietary software-as-a-service provider.

I never had this problem as I always hosted my own mailman instance on lists.gnumonks.org anyway, and all the entities that I've been involved in (whether non-profit or businesses) had their own mailing list hosts. From franken.de in the 1990s to netfilter.org, openmoko.org and now osmocom.org, we all pride ourselves on self-hosting.

But then there are plenty of smaller projects that neither have the skills nor the funding available. So they go to yahoo groups or some other service that will then hold them hostage without a way to switch their list archives from private to public, without downloadable archives or forwarding in the case they want to move away :(

Of course the larger FOSS projects also have their own list servers, starting from vger.kernel.org to Linux distributions like Debian GNU/Linux. But what if your FOSS project is not specifically Linux related?

The sort-of obvious candidates that I found all don't really fit:

Now don't get me wrong, I'm of course not expecting that there are commercial entities operating free-of-charge list hosting services where you neither pay with money, nor with your data, nor by becoming a spam receiver.

But still, in the wider context of the Free Software community, I'm seriously surprised that none of the various not-for-profit / non-commercial foundations or associations are offering a public mailing list hosting service for FOSS projects.

One can of course always approach any of the entities above and ask for a mailing list, even though it's strictly speaking off-topic for them. But who will do that if they have to ask uninvited for a favor?

I think there's something missing. I don't have the time to set up a related service, but I would certainly want to contribute in terms of funding in case any existing FOSS related legal entity wanted to expand. If you already have a legal entity, abuse contacts, a team of sysadmins, then it's only half the required effort.

by Harald Welte at May 23, 2018 10:00 PM

May 12, 2018

Bunnie Studios

Innovation Should Be Legal. That’s Why I’m Launching NeTV2.

I’d like to share a project I’m working on that could have an impact on your future freedoms in the digital age. It’s an open video development board I call NeTV2.

The Motivation

It’s related to a lawsuit I’ve filed with the help of the EFF against the US government to reform Section 1201 of the DMCA. Currently, Section 1201 imbues media cartels with nearly unchecked power to prevent us from innovating and expressing ourselves, thus restricting our right to free speech.

Have you ever noticed how smart TVs seem pretty dumb compared to our phones? It’s because Section 1201 enables a small cartel of stakeholders to pick and choose who gets to process video. So, for example, anyone is allowed to write a translation app for their smartphone that does real-time video translation of text. However, it’s potentially unlawful to build a box, even in the privacy of my own home, that implements the same thing over the HDCP-encrypted video feeds that go from my set top box to my TV screen.

This is due to a quirk of the DMCA that makes it unlawful for most citizens to bypass encryption – even for lawful free-speech activities, such as self-expression and innovation. Significantly, since the founding of the United States, it’s been unlawful to make copies of copyrighted work, and I believe the already stiff penalties for violating copyright law offer sufficient protection from piracy and theft.

However, in 1998 a group of lobbyists managed to convince Congress that the digital millennium presented an existential threat to copyright holders, and thus stiffer penalties were needed for the mere act of bypassing encryption, no matter the reason. These penalties are in addition to the existing penalties written into copyright law. By passing this law, Congress effectively turned bypassing encryption into a form of pre-crime, empowering copyright holders to be the sole judge, jury and executioner of what your intentions might have been. Thus, even if you were to bypass encryption solely for lawful purposes, such as processing video to translate text, the copyright holder nonetheless has the power to prosecute you for the “pre-crimes” that could follow from bypassing their encryption scheme. In this way, Section 1201 of the DMCA effectively gives corporations the power to license when and how you express yourself where encryption is involved.

I believe unchecked power to license freedom of expression should not be trusted to corporate interests. Encryption is important for privacy and security, and is winding its way into every corner of our life. It’s fundamentally a good thing, but we need to make sure that corporations can’t abuse Section 1201 to also control every corner of our life. In our digital age, the very canvas upon which we paint our thoughts can be access-controlled with cryptography, and we need the absolute right to paint our thoughts freely and share them broadly if we are to continue to live in a free and just society. Significantly, this does not diminish the power of copyrights one bit – this lawsuit simply aims to limit the expansive “pre-crime” powers granted to license holders, that is all.

Of course, even though the lawsuit is in progress, corporations still have the right to go after developers like you and me for the notional pre-crimes associated with bypassing encryption. However, one significant objection lodged by opponents of our lawsuit is that “no other users have specified how they are adversely affected by HDCP in their ability to make specific noninfringing use of protected content … [bunnie] has failed to demonstrate … how “users ‘are, or are likely to be,’ adversely affected by the prohibition on circumventing HDCP.” This is, of course, a Catch-22, because how can you build a user base to demonstrate the need for freedoms when the mere act of trying to build that user base could be a crime in itself? No investor would touch a product that could be potentially unlawful.

Thankfully, it’s 2018 and we have crowd funding, so I’m launching a crowd funding campaign for the NeTV2, in the hopes of rallying like-minded developers, dreamers, users, and enthusiasts to help build the case that a small but important group of people can and would do more, if only we had the right to do so. As limited by the prevailing law, the NeTV2 can only process unencrypted video and perform encryption-only operations like video overlays through a trick I call “NeTV mode”. However, it’s my hope this is a sufficient platform to stir the imagination of developers and users, so that together we can paint a vibrant picture of what a future looks like should we have the right to express our ideas using otherwise controlled paints on otherwise denied canvases.


Some of the things you might be able to do with the NeTV2, if you only had the right to do it…

The Hardware

The heart of the NeTV2 is an FPGA-based video development board in a PCIe 2.0 x4 card form factor. The board supports up to two video inputs and two video outputs at 1080p60, coupled to a Xilinx XC7A35T FPGA, along with 512 MiB of DDR3 memory humming along at a peak bandwidth of 25.6 Gbps. It also features some nice touches for debugging including a JTAG/UART header made to plug directly into a Raspberry Pi, and a 10/100 Ethernet port wired directly to the FPGA for Etherbone support. For intrepid hackers, the reserved/JTAG pins on the PCI-express header are all wired to the FPGA, and microSD and USB headers are provisioned but not specifically supported in any mode. And of course, the entire PCB design is open source under the CERN OHL license.
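
As a quick sanity check on that bandwidth number, 25.6 Gbps falls out of, for example, a 16-bit DDR3 interface running at 1600 MT/s; note that the bus width and data rate below are assumptions picked purely to illustrate the arithmetic, not a statement of how the NeTV2’s DDR3 is actually wired.

    # One memory configuration consistent with the quoted 25.6 Gbps figure.
    # The width and rate here are illustrative assumptions, not the design.
    bus_width_bits = 16        # assumed 16-bit DDR3 data bus
    transfer_rate  = 1600e6    # assumed DDR3-1600 (1600 MT/s)
    print(bus_width_bits * transfer_rate / 1e9)   # -> 25.6 (Gbps)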


The NeTV2 board as mounted on a Raspberry Pi

The design targets two major use scenarios which I refer to as “NeTV classic” mode (video overlays with encryption) and “Libre” mode (deep video processing, but limited to unencrypted feeds due to Section 1201).

In NeTV classic mode, the board is paired with a Raspberry Pi, which serves as the source for chroma key overlay video, typically rendered by a browser running in full-screen mode. The Raspberry Pi’s unencrypted HDMI video output is fed into the NeTV2 and sampled into a frame buffer, which is “genlocked” (e.g. timing synchronized) to a video feed that’s just passing through the FPGA via another pair of HDMI input/outputs. The NeTV2 has special circuits to help observe and synchronize with cryptographic state, should one exist on the pass-through video link. This allows the NeTV2 to encrypt the Raspberry Pi’s overlay feed so that the Pi’s pixels can be used for a simple “hard overlay” effect. NeTV classic mode thus enables applications such as subtitles and pop-up notifications by throwing away regions of source video and replacing it entirely with overlay pixels. However, a lack of access to unencrypted pixels disallows even basic video effects such as alpha blending or frame scaling.
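
To make the hard overlay idea concrete, here is a frame-level software model of the effect just described (a small illustrative Python/NumPy sketch of my own, not NeTV2 code): wherever the overlay matches the chroma-key color, the pass-through video shows; everywhere else the overlay pixel replaces it outright, which is also why alpha blending and scaling are out of reach in this mode.

    import numpy as np

    def hard_overlay(source, overlay, key=(0, 0, 0)):
        # Frame-level model of the hard overlay: overlay pixels equal to the
        # chroma-key color are treated as transparent, all others replace the
        # pass-through source pixel entirely (no blending).
        mask = np.all(overlay == np.array(key, dtype=overlay.dtype), axis=-1)
        return np.where(mask[..., None], source, overlay)

    # Example: a 1080p frame pair where the overlay is all key color (black),
    # so the source passes through untouched.
    src = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
    ovl = np.zeros_like(src)
    out = hard_overlay(src, ovl)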

In Libre mode, the board is meant to be plugged into a desktop PC via PCI-express. Libre mode only works with unencrypted video feeds, as the concept here is that full video frames are sampled and buffered up inside the NeTV2 so they can be forwarded on to the host PC for further processing. Here, the full power of a GPU or x86 CPU can be applied to extract features and enhance the video, or perhaps portions of the video could even be sent to the cloud for processing. Once the video has been processed, it is pushed back into the NeTV2 and sent on to the TV for viewing. Libre mode is perhaps the most interesting mode to developers, yet is very limited in everyday applications thanks to Section 1201 of the DMCA. Still, it may be possible to craft demos using properly licensed, unencrypted video feeds.

The reference “gateware” (FPGA design) for the NeTV2 is written in Python using migen/LiteX. I previously compared the performance of LiteX to Vivado: for an NeTV2-like reference design, the migen/LiteX version consumes about a quarter the area and compiles in less than a quarter the time – a compelling advantage. migen/LiteX is a true open source framework for describing hardware, which relies on Xilinx’s free-to-download Vivado toolchain for synthesis, place/route, and bitstream generation. There is a significant effort on-going today to port the full open source FPGA backend tools developed by Clifford Wolf from the Lattice ICE40 FPGAs to the same Xilinx 7-series FPGAs used in NeTV2. Of course, designers that prefer to use the Vivado tools to describe and compile their hardware are still free to do so, but I am not officially supporting that design methodology.
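
For readers who haven’t seen migen before, the toy fragment below gives a flavor of what describing hardware in Python looks like: it re-expresses the overlay compositing sketched a couple of paragraphs up as a per-pixel combinatorial mux. It is just an illustration of the migen style, under my own naming, and not a piece of the actual NeTV2 gateware.

    from migen import Module, Signal, If

    class HardOverlayMux(Module):
        # Toy illustration of migen style, not actual NeTV2 gateware: a
        # per-pixel mux where key-colored overlay pixels are transparent.
        def __init__(self, key=0x000000):
            self.source  = Signal(24)   # pass-through video pixel (RGB888)
            self.overlay = Signal(24)   # genlocked overlay pixel
            self.pixel   = Signal(24)   # pixel forwarded toward the output

            self.comb += If(self.overlay == key,
                            self.pixel.eq(self.source)
                         ).Else(
                            self.pixel.eq(self.overlay)
                         )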

I wanted to narrow the gap between development board and field-deployable solution, so I’ve also designed a hackable case for the NeTV2. The case can hold the NeTV2 and a mated Raspberry Pi, and consists of three major parts: a top shell, a bottom shell/back bezel, and a stand-alone front bezel. It also has light pipes to route key status LEDs to the plane of the back bezel. It’s designed to be easily disassembled using common screwdrivers, and features holes for easy wall-mounting.

Most importantly, the case features extra space with a Peek Array on the inside for mounting your own PCBs or parts, and the front bezel is designed for easier fabrication using either subtractive or additive methodologies. So, if you have a laser cutter, you can custom cut a bezel using a simple, thin sheet of acrylic and slot it into the grooves circumscribing the end of the case. Or, if you have a low-res 3D printer, you can use the screw bosses to attach the bezel instead, and skip the grooves. When you’re ready to step up in volume, you can download the source file for the bezel and make a relatively simple injection mold tool for just the bezel itself (or the whole case, if you really want to!).

The flexibility of the PCI-express edge connector and the simplified bezel allows developers to extend the NeTV2 into a system well beyond the original design intention. Remember, for an FPGA, PCI-express is just a low-cost physical form factor for generic high speed I/O. So, a relatively simple to design and cheap to fabricate adapter card can turn the PCI-express card-edge connector into a variety of high-speed physical standards, including SATA, DisplayPort, USB3.0 and more. There’s also extra low-speed I/O in the header, so you can attach a variety of SPI or I2C peripherals through the same connector. This electrical flexibility, combined with PCBs mounted on the Peek Array and a custom bezel, enables developers to build customer-ready solutions with minimal effort and tooling investment.

The NeTV2 is funding now at Crowd Supply. I’m offering a version with a higher-capacity FPGA only for the duration of the campaign, so if you’re a developer, be sure to check that out before the campaign ends. If you think that reforming the DMCA is important but the NeTV2 isn’t your cup of tea, please consider supporting the EFF directly with a donation. Together we can reform Section 1201 of the DMCA, and win back fundamental freedoms to express and innovate in the digital age.

by bunnie at May 12, 2018 02:22 AM

Name that Ware, May 2018

The Ware for May 2018 is shown below.

This month’s contest isn’t about naming a ware. Instead, the challenge is to guess the total amount of vias + holes on the NeTV2 PCB. The closest (or first correct) guess will get a DVT-revision of the board (same as the one shown above) as a prize! Please note: depending on your country of residence you may have to cover import duties or VAT.

At the end of this month, once the contest is over, I’ll upload the PCB source files to a public repo. The contest would be just too easy otherwise :)

by bunnie at May 12, 2018 02:20 AM

Winner, Name that Ware April 2018

The ware for April 2018 is an SAE 316L stainless steel cerclage currently embedded in my right kneecap. There’s also a pair of 2.5mm holes drilled lengthwise through the patella, and a set of sutures woven into my quadriceps, neither of which show up in an X-ray. The hardware is to repair a double-break of my knee; I simultaneously suffered a patellar avulsion fracture and a tear of the quadriceps tendon. It’s rare to get both injuries at once, but hey why do anything half-assed. As for how it happened – well, you’ll have to believe me that it’s just from falling while walking on flat ground. There was a puddle of greasy/soapy water outside a hawker stall and boom, I’m unable to walk for months. I wish I had some cooler story, so that’s why I had this month’s competition – to try and find a better bar story. To that end, gratz to perlfriend for the most imaginative story; email me for your prize!

This was my first time going through major surgery. I’ve previously had some minor patches and cuts, where I only used local anesthesia and watched the surgeon perform. As a hardware guy, I think your body is the most fascinating piece of hardware you’ll ever own, so I don’t miss an opportunity to take a peek inside when maintenance is required (to clarify, I find it totally gross to watch other people’s surgeries, but somehow when it’s my own body it’s different – my sense of wonder overtakes my aversion to blood).

So I’ve never been put fully under before. When the anesthesiologist came to interview me, I asked what my options were for local anesthesia. She then tried to convince me that general anesthesia is very safe because they have this computer running some complicated algorithm that considers my weight and height to dose me correctly. Besides the fact that they had to estimate my weight because I couldn’t stand on a scale, my inner monologue was screaming “As a hardware engineer, putting my life in the hands of a computer sounds terrifying” but instead I managed to ask politely, “and what happens if the computer has a problem?” At which point she smiled and re-assured me that she was constantly monitoring my vital signs and trimming the dosage.

Side note, a friend of mine later on pointed me to an article where apparently some infusion pumps have Wifi that’s on during surgery and the protocol is pathetically hackable. OK, medical equipment makers: there’s some equipment that simply should not be IoT, and an infusion pump is absolutely one of them. There is simply no valid reason for the computer attending to the dosage level of a potentially fatal chemical to be spending any cycles answering TCP packets. Death by DDoS is not an acceptable scenario. If you need to push an update, do it by a USB disk or a plug-in Ethernet jack, so they can at least air gap the damn thing during an operation.

Had I known about the wifi vulnerability of anesthesia infusion pumps, I probably would not have consented to general anesthesia, or at least asked to check that the model they were using had no IoT capability. But the anesthesiologist finally convinced me with the argument that the local procedure would require sticking a needle into my spinal cord and withdrawing cerebrospinal fluid, and I’m like, I don’t know what that is but it sounds important for my brain to work and I kind of like my spinal cord intact, so maybe it’s not worth the risk.

So, I finally consent to being put under. Propofol, the drug she used to initiate anesthesia, is pretty incredible. It acts within seconds, and despite taking over an hour to metabolise in the liver, it clears from the brain in minutes so you can come out of anesthesia quickly. I have to wonder how they discovered it, or developed it. Anyways, the anesthesiologist inserted an IV into my right hand – I appreciated her attentiveness to the detail that I’m left-handed – and proceeded to administer the propofol. I could definitely feel the propofol as it entered my system. I had the same feeling of pins and needles that one gets when a limb falls asleep, a sort of searing, prickling pain. The pain quickly shot up my arm, then a warm wave of prickling across my face, and then…

I’m in the recovery room, and my surgeon is there giving me the post-op interview. About 2.5 hours of my life went by and I had no memory of it. Sure enough, no nausea or dizziness – the anesthesiologist did a good job. I was lucid and talking to the surgeon about the operation. Then, maybe about five or maybe fifteen minutes later, my body goes into shock – I’m shivering uncontrollably, and I’m starting to become acutely aware of the pain in my knee.

Apparently, during the surgery, I was given Fentanyl and Ultiva, both potent opioid painkillers. They give them to you because even though you’re out, your body still has autonomic responses to pain that can complicate surgery with excess bleeding and so forth. From what I read, both clear pretty quickly, so I’m guessing the onset of shock corresponded with the painkilling effect of the injected opioids wearing off.

I know exactly what drugs and equipment they used because at the conclusion of the surgery I was given a detailed itemized bill of everything involved; above is a tiny sample of it. It was fascinating to reverse engineer the surgery from the BOM. That’s how I knew they used a 2.5mm drill bit, how I knew they put a breathing tube in me, and that I also got injected with Aloxi to help suppress nausea. I actually sort of wish I had known about the drill bit ahead of time, because I would have asked to keep it. They were going to throw it away anyway, and it would have been pretty neat to later drill mounting bosses with the surgical-grade 2.5mm bit that had drilled holes in my body.

Just for the record, the overall hospital bill including surgeon’s fees, surgical materials, OT rental, recovery ward, drugs, X-rays and my first round of physiotherapy was about S$20.7k, or US$15.5k at the prevailing exchange rate. This is considered expensive by Singapore standards, as I went to one of the nicer private hospitals for the operation, but as I understand it, it is quite cheap by US standards. Fortunately my insurance covered all but about $350 of it.

This was also my first time using opioid painkillers in a serious way. I’m deeply concerned about the addictive nature of opioids, so I don’t take them lightly. I had taken Vicodin once before when I had a wisdom tooth removed, and I remember it having the effect of “moving the pain sideways” – it felt like the pain was just outside arm’s length, so I could ignore it if I wanted to, but I definitely knew it was still there. Knee surgery was about a hundred times more traumatic than getting a wisdom tooth removed, and for the first day there was a terrific amount of pain. So they gave me Targin, which is a mix of oxycodone and naloxone.

Targin is a clever way to manage opioid addictiveness. Oxycodone is the opioid; naloxone is the stuff they use to treat opioid overdoses, and it blocks the painkilling effects. Naloxone is mixed in with the oxycodone to prevent misuse (crushing & snorting, or injecting); because naloxone is poorly absorbed through the digestive system, when taken orally the oxycodone can still work – for a short time. When the surgical painkillers wore off, basically all the pain receptors in my knee were screaming at me and informing me rather viscerally about what I already knew intellectually – that my knee was cut up, had holes drilled in it, wired up with stainless steel, fibers woven into the muscles, and stitched back together. So I got one dose of Targin, which kicked in after about 20-30 minutes. The oxycodone effect was pretty strong – the acute pain just went away, everything felt okay and I drifted into sleep…only to be woken up about 2-3 hours later by the pain in my knee, presumably due to the naloxone component finally kicking in and suppressing the opioid effect. However, I was only allowed one Targin pill every 12 hours, and thus had only acetaminophen to manage the pain until the next scheduled dose. It was definitely manageable with some distraction, but there was no way I was going back to sleep again. I could see how oxycodone could be addictive; everything just seemed so okay despite everything being incredibly wrong, so I’m grateful they mixed it with naloxone and administered it such that my final memory of the trip was being clobbered by my knee pain rather than a lingering desire for more.

Of course, the pain doesn’t end in 24 hours, but they “graduated” me to a weaker painkiller, Ultracet. The Ultracet is a mix of Tramadol and acetaminophen. Tramadol itself really kicks in only after it’s processed by the liver into desmetramadol by the enzyme CYP2D6, and I’m heterozygous for a copy with reduced activity. I guess that might explain why when I took the prescribed dosage of two pills it was reasonably effective, but when I halved it to one pill I felt almost nothing at all. Overall, Tramadol was less effective at painkilling and more effective in just making me feel a little dizzy and sleepy. In addition to binding to opioid receptors, it’s also a serotonin releasing agent, a bit like MDMA. So during the daytime I would just deal with the pain by distracting myself with work, and at night, when trying to fall asleep, I’d take the Ultracet and sleep like a baby – I was sleeping almost 12 hours a night for the couple of weeks after surgery, which I think was extremely helpful in my recovery. Significantly, Tramadol doesn’t quite suppress pain; like my one experience with Vicodin, it turns it into a fact I’m aware of but can deal with. When I would sleep, my dreams would be deep and lucid, but my brain would often reference the pain in my knee and ascribe it to some random weird dream explanation, like trying to go in for a big penalty kick in a soccer game over and over again. Fortunately, Tramadol didn’t feel like it had too much of an addiction risk for me – the awake-state “high” was a bit unpleasant, as I don’t particularly enjoy the dizziness or drowsiness. It was more of a sleeping aid, as it allowed my mind to let go of the pain in my knee while falling asleep. I was finally able to wean myself off painkillers entirely and fall asleep drug-free about two weeks after the operation, but in a weird way I ever so slightly missed that really sound sleep I’d have while on them. Opioid painkillers are no joke. They’re absolutely essential for dealing with pain, but take them with discipline and caution.

While the story behind the accident itself wasn’t very interesting, I did find going through the surgery, healthcare system, and drugs to be an interesting learning experience.

by bunnie at May 12, 2018 12:00 AM

April 29, 2018

Harald Welte

OsmoDevCon 2018 retrospective

One week ago, the annual Osmocom developer meeting (OsmoDevCon 2018) concluded after four long and intense days with old and new friends (schedule can be seen here).

It was already the 7th incarnation of OsmoDevCon, and I have to say that it's really great to see the core Osmocom community come together every year, to share their work and experience with their fellow hackers.

Ever since the beginning, we've had a tradition of looking beyond our own projects. In 2012, David Burgess presented on OpenBTS. In 2016, Ismael Gomez presented on srsUE + srsLTE, and this year we've had the pleasure of having Sukchan Kim come all the way from Korea to talk to us about his nextepc project (a FOSS implementation of the Evolved Packet Core, the 4G core network).

What has also been a regular "entertainment" part in recent years are the field trip reports to various [former] satellite/SIGINT/... sites by Dimitri Stolnikov.

All in all, the event has become at least as much about the people as about the technology. It's a community of like-minded people who in part still work on joint projects, but who often work independently and scratch their own itch - whether open source mobile comms related or not.

After some criticism last year, the so-called "unstructured" part of OsmoDevCon has received more time again this year, allowing for exchange among the participants irrespective of any formal / scheduled talk or discussion topic.

In 2018, with the help of c3voc, for the first time ever, we've recorded most of the presentations on video. The results are still in the process of being cut, but are starting to appear at https://media.ccc.de/c/osmodevcon2018.

If you want to join a future OsmoDevCon in person: Make sure you start contributing to any of the many Osmocom member projects now to become eligible. We need you!

Now the sad part is that it will take one entire year until we reconvene. May the Osmocom Developer community live long and prosper. I want to meet you guys for many more years at OsmoDevCon!

There is of course the user-oriented OsmoCon 2018 in October, but that's a much larger event with a different audience.

Nevertheless, I'm very much looking forward to that, too.

The OsmoCon 2018 Call for Participation is still running. Please consider submitting talks if you have anything open source mobile communications related to share!

by Harald Welte at April 29, 2018 10:00 PM

April 24, 2018

Bunnie Studios

Paper As a Substrate for Circuits

I’ve spent a considerable portion of my time in the past couple of years helping to develop products for Chibitronics, a startup that blends two unlikely bedfellows together, papercraft and electronics, to create paper circuits. The primary emphasis of Chibitronics is creating a more friendly way to learn, design and create electronics. Because of this, much of the material relating to paper circuitry on the Internet looks more like art than electronics.

This belies the capabilities of paper as an engineering material. Google’s Cardboard and Nintendo’s Labo are mainstream examples of paper’s extraordinary capabilities. Prof. Nadya Peek at the University of Washington has written several academic papers on building multi-axis CNC machines using paper products.

A couple points to clarify up top: for the sake of brevity, I will use the term “paper” instead of “paper and/or cardboard”, analogous to how one would refer to a PCB made of Kapton or FR-4 both as printed circuits. Furthermore, while Chibitronics focuses on providing solderless solutions for younger learners, the techniques shared in this post target engineers who have the skill to routinely assemble modern SMT designs. I assume you’ve got a good soldering iron and a microscope, and you know how to use both (or perhaps are up to the challenge of learning how to use them better).

The Argument for Paper
For prototyping and learning the principles of electronics, paper has several distinct advantages over breadboards.

The primary advantage of a breadboard is that it’s solderless, and as a result you can re-use the components. This made a lot more sense back when a 6502 used to cost $25 in 1975 ($115 in 2018 money), but today the wire jumper kit for a solderless breadboard can cost more than the microcontroller. Considering also the relatively high cost of a solderless breadboard and the relatively low value of the parts, you’re probably better off buying extra parts and soldering them to disposable paper substrates than purchasing a re-usable solderless breadboard for all but the simplest of circuits.


Electronic components used to be really expensive, so you wanted to re-use them as much as possible. The 8-bit 6502 at $115 (adjusted for inflation) was considerably cheaper than its competition in 1975 (from Wikipedia).

On the other hand, paper has a number of important advantages. The first is that it’s compatible with surface mount ICs. This is increasingly important as chip vendors have largely abandoned DIP packages in favor of SMT packages: mobile computing represents the highest demand for chips, and SMT packages beat DIP packages in both thermal and parasitic electrical characteristics. So if you want a part that wasn’t designed by someone wearing a jean jacket and highwaters, you’re probably going to find it only available in SMT.


The evolution of packaging (from left to right): DIP, SOIC, TSSOP, and WLCSP. The WLCSP is shown upside-down so you can see how solder balls are applied directly to a naked silicon chip. It’s the asymptotic size limit of packaging, and is quite popular in mobile phones today.

The second and perhaps more important advantage is that it’s electrically similar to a printed circuit substrate. Breadboards feature long, loose wires with no sense of impedance control at all. Printed circuits are 2.5-D (i.e. planar multi-layer) constructions that feature short, flat wires and oftentimes ground planes that enable impedance control. Paper circuit construction is much closer to that of a printed circuit, in that flat copper tape forms traces that can be layered on top of each other (using non-conductive tape to isolate the layers). Furthermore, when laid on top of a controlled-thickness substrate such as cardboard, the reverse side can be covered with a plane of copper tape, thus allowing for impedance control. The exact same equations govern impedance control in a paper circuit constructed with a ground plane as in a printed circuit constructed with a ground plane – just the constants are different.


This equation works for both FR-4 and cardboard. Just plug in the corresponding ε, w, t, and h. From rfcafe.com.
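
The equation referenced above appeared as an image in the original post. Assuming it is the widely quoted IPC microstrip approximation, Z0 = 87/sqrt(εr + 1.41) × ln(5.98h / (0.8w + t)), here is a minimal C sketch that plugs in typical FR-4 numbers next to values for mat board; the cardboard permittivity is a guess, not a measured value:

#include <math.h>
#include <stdio.h>

/* Characteristic impedance of a surface microstrip, using the widely quoted
 * IPC approximation Z0 = 87/sqrt(er + 1.41) * ln(5.98*h / (0.8*w + t)).
 * All lengths must be in the same unit (e.g. mm). */
static double microstrip_z0(double er, double w, double t, double h)
{
    return 87.0 / sqrt(er + 1.41) * log(5.98 * h / (0.8 * w + t));
}

int main(void)
{
    /* FR-4: er ~ 4.4, 1.6 mm core, 1 oz copper (~0.035 mm), 3 mm wide trace */
    printf("FR-4 microstrip:      %.1f ohm\n", microstrip_z0(4.4, 3.0, 0.035, 1.6));
    /* mat board guess: er ~ 2.5, 1.4 mm thick, copper tape ~0.03 mm */
    printf("cardboard microstrip: %.1f ohm\n", microstrip_z0(2.5, 3.0, 0.030, 1.4));
    return 0;
}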

This means you can construct RF circuits using paper electronic techniques — from directional couplers to antennae to amplifiers. The low parasitics of copper tape also mean you can construct demanding circuits that would be virtually impossible to breadboard, such as high-power switching regulators, where ripple performance is heavily impacted by parasitic resistance and inductance in the ground connections.


A 10W, 5V buck regulator laid out with paper electronics. The final layout closely resembles the datasheet layout example and performs smoothly at 2A load; this circuit probably wouldn’t regulate properly at high loads if built with an SMT-to-DIP breakout and a breadboard.

In addition to impedance control and lower parasitics, the use of copper tape to form planes means paper electronics can push the power envelope by leveraging copper planes as heatsinks. This is an important technique in FR-4 based PCBs; in fact, for many chips, the dominant path for heat to escape is not through the package surface, but instead through the pins and package traces.


Copper conducts heat about 1000x better than plastic, so even the tiny metal pins on a chip can conduct heat more efficiently from an IC than the surface of the plastic package. Flip-chip on lead frame graphic adapted from Electronic Design.

The copper which forms the pins and lead frames of a chip package is a vastly superior (about 1000x better) heat conductor compared to air or plastic, so a cheap and effective method of heatsinking is to lay out a large plane of copper connected to the chip. Below is an example of a 60-watt power driver that I built using paper electronics, leveraging a copper tape plane plus extra foil as heat sinks.


That’s a 12A power transistor, and this heater control circuit can use much of that ampacity. Additional copper foil was soldered on for extra heat sinking. The equivalent in DIP/TO packages might melt a breadboard during normal operation.
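
To see why that copper plane matters, the steady-state thermal model is simply junction temperature = ambient temperature + dissipated power × junction-to-ambient thermal resistance, and a copper pour hanging off the pins is one of the cheapest ways to drive that thermal resistance down. A back-of-the-envelope sketch, with invented numbers rather than measurements from this board:

#include <stdio.h>

/* Steady-state thermal model: Tj = Ta + P * theta_ja.
 * The theta_ja values below are illustrative guesses, not datasheet numbers. */
static double junction_temp(double t_ambient, double power_w, double theta_ja)
{
    return t_ambient + power_w * theta_ja;
}

int main(void)
{
    double p  = 2.0;   /* watts dissipated in the package */
    double ta = 25.0;  /* ambient temperature, deg C */

    printf("minimal pads:      Tj = %.0f C\n", junction_temp(ta, p, 90.0));
    printf("large copper pour: Tj = %.0f C\n", junction_temp(ta, p, 35.0));
    return 0;
}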

Paper electronics has one additional advantage that is unique to itself: the ability to fold and bend into 3-dimensional shapes. This is something that neither breadboards nor FR-4 circuit boards can readily do. Normally, circuit boards that can bend require more exotic processes like rigid-flex or flex PCB manufacture; but paper supports this natively. Artists take advantage of this property to create stunning electronic origami, but engineers can also use this property to great effect.


Trox Circuit Study 05 by Jonathan Bobrow, from the Paper Curiosities gallery

The ability to fold a sheet of cardboard or paper means that paper circuits can be slotted around tight corners and conformed to irregular or flexible surfaces, eliminating connectors and creating thinner, sleeker packages. Need a test point? Cut a hanging tab out of your substrate, and you’ve got a fold-up point where you can attach an alligator clip!

Using Paper to Facilitate Prototyping with SMT
Here’s a detailed example of the construction techniques I use when working with paper electronics. I built a breakout board to solve a common problem: matching voltages between chips. Older chips are powered by 5V, newer ones by 1.8V or lower, and none of these are a match for your typical 3.3V-tolerant microcontroller. There are small circuits called “level shifters” that can safely translate digital signals from one voltage swing to another. The problem is that most of the “good” ICs today come only in SMT packages, so if you’re prototyping on a breadboard or using alligator clips to cobble something together, you’ve got very limited options. In fact, one of my “go-to” ICs for this purpose is the 74LVC1T45; it’s capable of 420Mbps data rates, and can convert anywhere from 1.65V to 5.5V in a direction that can be selected using an input pin. The packaging options for this chip range from a DSBGA to a SOT-23 – clearly a chip targeted at the mobile phone generation, and not meant for breadboarding.

However, I’m often confronted with the problem of driving a WS2812B LED strip from the I/O of a modern microprocessor. WS2812B LEDs operate off of 5V, and expect 5V CMOS levels, and no modern microprocessor can produce that. You can usually get away with driving a single WS2812B with a 3.3V-compatible I/O, but if you’re driving a long chain of them you’ll start to see glitches down the chain because of degraded timing margins due to improper voltage levels at the head of the chain. So, I’d love to have a little breakout board that adapts a SOT-23-sized 74LVC1T45 to an alligator-clip friendly format.

Instead of laying out a PCB, fabbing a one-off, and soldering it together, I took a piece of cardboard and built a breakout board in under an hour. Furthermore, because I can bend cardboard, I can make my breakout board dual-purpose: I can add pins to it that make it breadboard-compatible, while having fold-up “wings” for alligator clips. Without the ability to fold up, the alligator clip extensions would block access to the breadboard connections. Below are some shots of the finished project.


Native comments plus on-board decoupling caps make this simple to use, even with long alligator clips


DIP pins coming out the bottom side allow this to be used in a breadboard, too


SMT, DIP, and alligator clips all coexisting in a single breakout — easy to do with paper!

The first step in making a paper circuit is to grab a suitable piece of cardboard. I’ve come to really enjoy the cardboard used to make high-quality mats for picture framing. It’s about 1.3-1.4mm thick, which is fairly similar to FR-4 thickness, and its laminate structure means you can score one side and make accurate folds into the third dimension. The material is also robust to soldering temperatures, and its dense fiber construction and surface coating keeps the paper surface intact when pulling off mild adhesives, like the ones found on copper tape.

I’ll then cut out a square about the size I think it needs to be. I’ll usually cut a little larger, because it’s trivial to trim it back later on, but janky to tape on an extension if it’s too small.

Then, I lay the components on top and sketch a layout – this one’s pretty simple, I just note where I want the SOT-23 to go, and where the breakouts should run to.

Once I’m happy with the sketch, I’ll lay down copper tape, solder on the components, and then fold/bend the breakout into the final shape.

The trickiest and most important technique to master is how to mate the copper tape to the tiny pins of the SOT-23 (or other SMT) package. I use a trick that Dr. Jie Qi taught me, which is to cut a set of triangular notches into the tip of a wider piece of copper tape of roughly the right pitch. The triangular shape lets you adjust the size of the landing pad by simply changing the gap between the two ends of the tape, alleviating the need for precise alignment. Then, once the component is soldered to the wide piece of copper tape, you take a knife and cut the tape into individual traces – voilà, an SMT breakout is born!

A lot of this is better shown not told, so I’ve created a little video, below, that walks you through the entire process of building the breakout.

Try Something Different, and You Might be Rewarded!
Paper as an electrical engineering material is something I would never have thought of on my own – I grew up prototyping with breadboards and wire-wrap, and I was prejudiced against paper as a cheap, throwaway material that I incorrectly thought was too flammable to solder. Instead, I spent hundreds of dollars on breadboards and wire wrap sockets, when I could have made do with much cheaper materials. Indeed, there is an irrational psychology that regards expensive things as inherently better than cheap things, which means cheap options are often overlooked in the search for solutions to hard problems.

But this is why it’s important to collaborate with experts outside your normal field of expertise – the further outside, the better. In addition to being a great engineer, Jie Qi is a prodigious artist. Through our Chibitronics collaboration, she’s added so much more depth and dimension to my world on so many fronts. She’s imparted upon me invaluable gifts of skills and perspectives that I would never have developed otherwise.

It’s my hope that by sharing a little more about paper electronics, I can bring a fresh perspective on old problems while broadening awareness and getting more users to improve upon the basics. After all, this is a new area, and we’re just starting to explore the possibilities.

Interested in hacking paper electronics? Check out the Chibitronics Creative Coding Kit, and the Love to Code product line. It’s a gentle introduction to paper electronics targeted at newcomers, but it’s also open source, so you can take it as far as your imagination can go — hook up a JTAG box, build the OS, and get hacking! Get 30% off the Creative Coding Kit with the TRY-LTC-18 coupon code until June 30, 2018!


Quick edit: some basic techniques on using copper tape are documented at Chibitronics’ Copper Tape Chronicles. It’s a small compilation of videos like the one below:

Also, here’s a handy tip on how to keep copper tape from falling off the roll:

by bunnie at April 24, 2018 12:20 PM

April 23, 2018

Bunnie Studios

Name that Ware, April 2018

The Ware for April 2018 is shown below:

It’s a simple ware, but you could say I’m rather personally attached to it. Prize to the funniest/most creative story about this ware :)

by bunnie at April 23, 2018 01:40 PM

Winner, Name that Ware March 2018

The Ware for March 2018 is a Fuso 150 Fish Finder. Mangel hit the nail on the head with this guess — excellent work, as always! email me for your prize :)

by bunnie at April 23, 2018 01:39 PM

April 22, 2018

Harald Welte

osmo-fl2k - Using USB-VGA dongles as SDR transmitter

Yesterday, during OsmoDevCon 2018, Steve Markgraf released osmo-fl2k, a new Osmocom member project which enables the use of FL2000 USB-VGA adapters as ultra-low-cost SDR transmitters.

How does it work?

A major part of any VGA card has always been a rather fast DAC which converts the 8-bit digital values for (each of) red, green and blue into analog levels at the pixel clock. Given that fast DACs were very rare/expensive (and still are to some extent), the idea of (ab)using the VGA DAC to transmit radio has been pursued by many earlier, mostly proof-of-concept projects, such as Tempest for Eliza in 2001.

However, with osmo-fl2k, for the first time it was possible to completely disable the horizontal and vertical blanking, resulting in a continuous stream of pixels (samples). Furthermore, as the supported devices have no frame buffer memory, the samples are streamed directly from host RAM.

As most USB-VGA adapters appear to have no low-pass filters on their DAC outputs, it is possible to use any of the harmonics to transmit signals at much higher frequencies than normally possible within the baseband of the (max) 157 Mega-Samples per second that can be achieved.
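
To make the harmonics argument concrete: an ideal zero-order-hold DAC clocked at fs produces image copies of the baseband signal around every integer multiple of fs, attenuated by a sin(x)/x envelope. A rough sketch (idealized; the real FL2000 output will differ) of where a 20 MHz baseband tone lands when clocked at 157 MS/s:

#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* Relative output amplitude of an ideal zero-order-hold DAC at frequency f:
 * |sinc(f/fs)| = |sin(pi*f/fs) / (pi*f/fs)|. */
static double zoh_gain(double f, double fs)
{
    double x = PI * f / fs;
    return (x == 0.0) ? 1.0 : fabs(sin(x) / x);
}

int main(void)
{
    double fs = 157e6;  /* maximum sample rate quoted for the FL2000 */
    double fb = 20e6;   /* example baseband tone */

    for (int n = 0; n <= 3; n++) {
        if (n > 0)
            printf("image at %6.1f MHz, %5.1f dB\n",
                   (n * fs - fb) / 1e6, 20.0 * log10(zoh_gain(n * fs - fb, fs)));
        printf("image at %6.1f MHz, %5.1f dB\n",
               (n * fs + fb) / 1e6, 20.0 * log10(zoh_gain(n * fs + fb, fs)));
    }
    return 0;
}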

osmo-fl2k and rtl-sdr

Steve is the creator of the earlier, complementary rtl-sdr software, which since 2012 has been transforming USB DVB adapters into general-purpose SDR receivers.

Today, six years later, it is hard to think of where SDR would be without rtl-sdr. Reducing the entry cost of SDR receivers nearly down to zero has done a lot for democratization of SDR technology.

There is hence a big chance that his osmo-fl2k project will attain a similar popularity. Having an SDR transmitter for as little as USD 5 is an amazing proposition.

free riders

Please keep in mind that Steve has done rtl-sdr just for fun, to scratch his own itch and for the "hack value". He chose to share his work with the wider public, in source code, under a free software license. He's a very humble person, he doesn't need to stand in the limelight.

Many other people since have built a business around rtl-sdr. They have grabbed domains with his project name, etc. They are now earning money based on what he has done and shared selflessly, without ever contributing back to the pioneering developers who brought this to all of us in the first place.

So, do we want to bet on whether history repeats itself? How long will it take for vendors to show up online advertising the USB VGA dongles as "SDR transmitters", possibly even with a surcharge? How long will it take for them to include Steve's software without giving proper attribution? How long until they violate the GNU GPL by not providing the complete corresponding source code to derivative versions they create?

If you want to thank Steve for his amazing work

  • reach out to him personally
  • contribute to his work, e.g.
  • help to maintain it
  • package it for distributions
  • send patches (via osmocom-sdr mailing list)
  • register an osmocom.org account and update the wiki with more information

And last, but not least, carry on the spirit of "hack value" and democratization of software defined radio.

Thank you, Steve! After rtl-sdr and osmo-fl2k, it's hard to guess what will come next :)

by Harald Welte at April 22, 2018 10:00 PM

March 31, 2018

Bunnie Studios

Name that Ware, March 2018

The Ware for March 2018 is shown below.

Thanks to Charl for contributing this ware!

by bunnie at March 31, 2018 03:15 PM

Winner, Name that Ware February 2018

The ware for February 2018 is an Ethernet card by Digitel, a Brazilian manufacturer, circa 1992. Brazil is an interesting market because protectionist trade measures made imported electronics very expensive. The nominal theory, as it was explained to me, was to protect and encourage local industries, thus creating and maintaining high-paying local jobs. I had never seen a piece of electronics from Brazil, but indeed, many of the circuit board’s components bear company logos I had never seen before and a Brazilian country of origin. While at least facially it seems the trade policies created local jobs, a comparison of this card to its contemporaries outside Brazil — such as this 1992-vintage SMC “Elite 16” Ethernet card featured at vintagecomputer.net — gives a hint at how these policies might have also impeded the progress of technology.

While 0x3d named the ware almost immediately, I really appreciated the cultural insight that Paulo Peres shared about the ware. For example, the fact that the ROM labeled MAQUEST is probably “MAQuina de ESTado” (state machine) and could have been a hack at the time to use locally-produced components to substitute for imported components. Even though in a free market a ceramic EEPROM + 74-series registers would be much more expensive than a PAL, the fact that the EEPROM and registers were produced in Brazil would have made the combo cheaper than an imported PAL once the trade tariffs were factored in. So congrats Paulo, email me for your prize! Although, my understanding is the trade barriers are still in place to this day, so maybe sending you something from overseas would cost you much more duty than it’s worth if you’re located in Brazil… :-O

by bunnie at March 31, 2018 03:15 PM

March 29, 2018

Harald Welte

udtrace - Unix domain socket tracing

When developing applications that exchange data over sockets, every so often you'd like to analyze exactly what kind of data is exchanged over the socket.

For TCP/UDP/SCTP/DCCP or other IP-based sockets, this is rather easy by means of libpcap and tools like tcpdump, tshark or wireshark. However, for unix domain sockets, unfortunately no such general capture/tracing infrastructure exists in the Linux kernel.

Interestingly, even after searching for quite a bit I couldn't find any existing tools for this. This is surprising, as unix domain sockets are used by a variety of programs, from sql servers to bind8 ndc all the way to the systemctl tool to manage systemd.

In the absence of any kernel support, the two technologies I can think of to implement this are either systemtap or an LD_PRELOAD wrapper.

However, I couldn't find an example for using either of those two to get traces of unix domain socket communications.

Ok, so I get to write my own. My first idea hence was to implement something based on top of systemtap, the Linux kernel tracing framework. Unfortunately, systemtap was broken in Debian unstable (which I have used for decades) at the time, so I went back to the good old LD_PRELOAD shim library / wrapper approach.
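
For readers who haven't written such a shim before, the mechanism is simply a shared library that overrides a libc symbol and forwards to the real implementation via dlsym(RTLD_NEXT, ...). Below is a stripped-down sketch of that pattern for write(); it is not udtrace itself, just the bare interposition idea:

/* shim.c -- not udtrace, just the minimal LD_PRELOAD interposition pattern.
 * build: gcc -shared -fPIC -o shim.so shim.c -ldl
 * run:   LD_PRELOAD=./shim.so some_program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <unistd.h>

ssize_t write(int fd, const void *buf, size_t count)
{
    static ssize_t (*real_write)(int, const void *, size_t);
    char msg[64];
    ssize_t ret;
    int n;

    if (!real_write)  /* resolve the real libc write() exactly once */
        real_write = (ssize_t (*)(int, const void *, size_t))dlsym(RTLD_NEXT, "write");

    ret = real_write(fd, buf, count);

    /* log via real_write() so the trace doesn't re-enter this hook through stdio */
    n = snprintf(msg, sizeof(msg), ">>> write(fd=%d) = %zd\n", fd, ret);
    if (n > 0)
        real_write(2, msg, (size_t)n);
    return ret;
}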

The result is called udtrace and can be found at

git clone git://git.gnumonks.org/udtrace

or alternatively via its github mirror.

Below is a copy+paste of its README file. Let's hope this tool is useful to other developers, too:

udtrace - Unix Domain socket tracing

This is a LD_PRELOAD wrapper library which can be used to trace the data sent and/or received via unix domain sockets.

Unlike IP based communication that can be captured/traced with pcap programs like tcpdump or wireshark, there is no similar mechanism available for unix domain sockets.

This LD_PRELOAD library intercepts the C library function calls of dynamically linked programs. It will detect all file descriptors representing unix domain sockets and will then print traces of all data sent/received via the socket.

Usage

Simply build libudtrace.so using the make command, and then start your to-be-traced program with

LD_PRELOAD=libudtrace.so

e.g.

LD_PRELOAD=libudtrace.so systemctl status

which will produce output like this:

>>> UDTRACE: Unix Domain Socket Trace initialized (TITAN support DISABLED)
>>> UDTRACE: Adding FD 4
>>> UDTRACE: connect(4, "/run/dbus/system_bus_socket")
4 sendmsg W 00415554482045585445524e414c20
4 sendmsg W 3331333033303330
4 sendmsg W 0d0a4e45474f54494154455f554e49585f46440d0a424547494e0d0a
[...]
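
Those hex strings are just the raw bytes on the wire; in this particular trace they are the D-Bus authentication handshake that systemctl performs against /run/dbus/system_bus_socket ("\0AUTH EXTERNAL " followed by the hex-encoded uid, then "NEGOTIATE_UNIX_FD" and "BEGIN"). A throwaway hex-to-ASCII helper (not part of udtrace) makes such dumps readable at a glance:

/* hex2ascii.c -- print a hex string as ASCII, non-printable bytes as '.'
 * usage: ./hex2ascii 0d0a4e45474f54494154455f554e49585f46440d0a424547494e0d0a */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <hexstring>\n", argv[0]);
        return 1;
    }

    const char *hex = argv[1];
    size_t len = strlen(hex);

    for (size_t i = 0; i + 1 < len; i += 2) {
        char byte[3] = { hex[i], hex[i + 1], '\0' };
        int c = (int)strtol(byte, NULL, 16);
        putchar(isprint(c) ? c : '.');
    }
    putchar('\n');
    return 0;
}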

Output Format

Currently, udtrace will produce the following output:

At the time an FD for a unix domain socket is created:

>>> UDTRACE: Adding FD 8

At the time an FD for a unix domain socket is closed:

>>> UDTRACE: Removing FD 8

At the time an FD for a unix domain socket is bound or connected:

>>> UDTRACE: connect(9, "/tmp/mncc")

When data is read from the socket:

9 read R 00040000050000004403000008000000680000001c0300002c03000000000000

When data is written to the socket:

9 write W 00040000050000004403000008000000680000001c0300002c03000000000000
Where
  • 9 is the file descriptor on which the event happened
  • read/write is the name of the syscall, could e.g. also be sendmsg / readv / etc.
  • R|W is Read / Write (from the process point of view)
  • followed by a hex-dump of the raw data. Only data successfully written (or read) will be printed, not the entire buffer passed to the syscall. The rationale is to only print data that was actually sent to or received from the socket.

TITAN decoder support

Getting hex-dumps is nice and fine, but normally one wants to have a more detailed decode of the data that is being passed on the socket.

For TCP based protocols, there is wireshark. But most protocols on unix domain sockets don't follow inter-operable / public standards, so even if one was to pass the traces into wireshark somehow, there would be no decoder.

In the Osmocom project, we already had some type definitions and decoders for our protocols written in the TTCN-3 programming language, using Eclipse TITAN. In order to build those decoders for MNCC and PCUIF, please use

make ENABLE_TITAN=1

when building the code.

Please note that this introduces a run-time dependency to libttcn3-dynamic.so, which is (at least on Debian GNU/Linux) not installed in a default library search path, so you will have to use something like:

LD_LIBRARY_PATH=/usr/lib/titan LD_PRELOAD=libudtrace.so systemctl status

by Harald Welte at March 29, 2018 10:00 PM

March 19, 2018

Bunnie Studios

An Intuitive Motor: IQ Control’s Serial-to-Position Module

Back when I was a graduate student, my advisor Tom Knight bestowed upon me many excellent aphorisms. One of them was, “just wrap a computer around it!” – meaning, rather than expending effort to build more perfect systems, wrap imperfection-correcting computers around imperfect systems.

An everyday example of this is the noise-cancelling headphone. Headphones offer imperfect noise cancellation, but by “wrapping a computer around it” – adding one or more microphones and a computer in the form of a digital signal processor (DSP) – the headphones are able to measure the ambient noise and drive the headphones with the exact inverse of the noise, thus cancelling out the surrounding noise and creating a more perfect listening experience.

Although the principle has found its way rapidly into consumer goods, it’s been very slow to find its way onto the engineer’s workbench. It’s the case of the cobbler’s children having no shoes.

In particular, it’s long bothered me that motors are so dumb. Motors are typically large, heavy, costly, power-hungry, and riddled with small mechanical imperfections. In comparison, microcontrollers are tiny, cheap, power-efficient, and could run software that trims imperfections while improving efficiency to the point where the motor + microcontroller combo is a win over a dumb motor on almost every metric. So why aren’t we wrapping a computer around every motor and just calling it a day?

Then one day a startup called IQ Motion Control showed me a demo of their smart motor, the IQ Position Module, at HAX in Shenzhen. My eyes instantly lit up – these guys have done it, and done it in a tasteful manner. This is the motor I’ve been waiting years for!

Meet the IQ Position Module
Simply put, the IQ Position Module is a brushless DC motor that talks serial and “thinks” at a higher level. I don’t have to design any complicated drive circuitry or buy a proprietary controller that talks some arcane or closed standard. I just plug an FTDI cable into my laptop, hook up power, clone a small git repo and I’m good to go.

Because of the microcontroller on the inside, the IQ Position Module can emulate a range of behaviors, from a simple stepper to a range of BLDC drive standards, but the real magic happens when you tell it where you want it to go and how fast, and it figures out the best way to get there.

“But wait”, you say, “my servos and brushed DC motors can do that just fine, I just control the pulse width!” This is true for crude and slow motion control applications, but if you really want to run at high speeds – like the ones achievable by a BLDC – you have to consider things like acceleration and deceleration profiles.

The video below shows what I mean. Here is a pre-production IQ Position Module that’s being commanded to turn once in two seconds; then twice, three times, and finally ten times in two seconds. The motor can go even faster, but the figurine I attached on top isn’t balanced well enough to do that safely. Notice how the speed “ramps up” and back down again, so that the motor stops with the figurine in precisely the same position at the end of every cycle, regardless of how fast I commanded the motor to turn.

That is magic.

And here’s a snippet of the core code used in the above demo, to give you an idea of how simple the API can be:

Just tell it where you want it, and by when — and the motor figures out an acceleration profile. Of course other parameters can be tweaked but the default behavior is reasonable enough!
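
The snippet in the original post is an image and is not reproduced here. The core idea, though (hand the controller a target angle plus a deadline, and let it shape the velocity so the rotor arrives exactly on time), can be sketched in a few lines. The function below uses a symmetric triangular velocity profile with invented names and numbers, purely for illustration; it is not IQ's firmware or API:

#include <stdio.h>

/* Illustrative only: a symmetric triangular velocity profile that covers
 * 'target' radians in 'duration' seconds. Accelerate for the first half,
 * decelerate for the second, so the rotor lands on target exactly on time. */
static double velocity_at(double t, double target, double duration)
{
    double v_peak = 2.0 * target / duration;   /* area under the triangle == target */
    double accel  = v_peak / (duration / 2.0);

    return (t < duration / 2.0) ? accel * t : accel * (duration - t);
}

int main(void)
{
    double target   = 10.0 * 2.0 * 3.14159265358979;  /* ten revolutions, in radians */
    double duration = 2.0;                             /* ...in two seconds */

    for (double t = 0.0; t <= duration + 1e-9; t += 0.25)
        printf("t=%.2f s  v=%.1f rad/s\n", t, velocity_at(t, target, duration));
    return 0;
}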

A Motor That’s Also an Input Device
But wait! There’s more. Because this is a “direct drive” system, there are no gears to shear. Anyone who has busted a geared servo motor by stalling or back-driving it knows what I mean. IQ Position Modules don’t have this problem. When you stop driving the IQ module – put it in a “coast” mode – it turns freely and without resistance.

This means the IQ motor doesn’t just “write” motion – it can “read” motion as well. Below is a video of a simple motion copy demo I cooked up in about an hour (including time spent refactoring the original API), where I implement bidirectional read/write of motion between two IQ Position Modules.

The ability to tolerate back-drive and also “go limp” is advantageous in robotics applications. Impact-oriented tasks — such as hammering a nail or kicking a ball — would rapidly degrade the teeth in a geared drive train. Furthermore, natural human motion incorporates the ability to go limp, such as the forward swing of a leg during walking. Finally, biological muscles are capable of applying a static force without changing position, such as when holding a cup on its sides without crushing it. Roboticists have developed a wide range of specialty actuators and techniques, including series elastic and variable stiffness actuators to address these scenarios. However, these mechanisms are often complicated and pricey.

The IQ Position Module’s lack of gearing means it’s back-drive tolerant, and it can apply an open-loop force without any risk of damage. This means you could, for example, use it to build a robot arm that can hammer a nail or pick up a cup. Robotic elements built using these would have far greater resilience to motion interference and impact forces than ones built using geared servos.

Having Fun with the IQ Position Module
While attending 34C3 back in December 2017, I managed to sit down for about an hour with my good friends Prof. Nadya Peek and Ole Steinhauer, and we built a 2-axis robot arm that could do kinesthetic learning through keyframing, using nothing more than two IQ Position Modules, a Dunkin’ Donuts box, a bunch of schwag stickers we stole from the FOSSASIA assembly, and the base plate of an old PS4 … because fail0verflow.

This was improvisational making at its best; we didn’t really plan the encounter so much as it emerged out of the chaos that is the Computer Congress. While Nadya was busy cutting, folding, and binding the cardboard into a 2-axis robot arm, Ole “joined” (#lötwat?!) together the power & serial connectors, and I furiously wrote the code that would do the learning and playback — while also doing my best to polish off a couple beers. Nadya methodically built one motion axis first, and we tested it; satisfied with the result, she built and stacked a second axis on top. With just a bit of tweaking and prodding we managed to pull off the demo below:

It’s a little janky, but given the limited materials and time frame for execution, it hints at the incredible promise that IQ Position Modules hold.

So, if you’ve ever wanted to dabble in robotics or motion control, but have been daunted by control theory and arcane driver protocols (like I’ve been), check out the IQ Position Module. They are crowdfunding now at CrowdSupply. I backed their campaign to reserve a few more Position Modules for my lab – by wrapping a smart computer around a dumb motor, they’ve created a widget that lets me go from code to physical position and back with a minimal amount of wiring and an accessible API.

Their current funding campaign heavily emphasizes the capabilities of their motor as a “better BLDC” for the lucrative drone market, and I respect their wisdom in focusing their campaign message around a single, economically significant vertical. A cardinal sin of marketing revolutionary tech is to sell it as a floor wax and a dessert topping — as painful as it may be, you have to pick just one message and push hard around it. However, I’m happy they are offering the IQ Position Module as part of the campaign, and enabling me to express my enthusiasm to the maker and robotics communities. I’ve waited too long to have a motor with this capability in my toolbox — finally, the cobbler’s children have shoes!

by bunnie at March 19, 2018 12:49 PM

March 12, 2018

Dieter Spaar

Communication on a FlexRay bus

FlexRay is an automotive bus system which is not as common as the CAN bus, but is used in several car brands, e.g. BMW, Mercedes and Audi. I won't go into the details of FlexRay, there are several good introductions elsewhere.

What is needed to read the data on a FlexRay bus, or even better, actively participate in the communication (send your own data)? Compared to the CAN bus a FlexRay bus is complicated:

  • Sending and receiving on a CAN bus is simple, similar to serial (UART) communication: You basically only need to know the bus speed and you are done.
  • For a FlexRay bus you have to know about 50 more or less important parameters which define things like the length and number of the frames in the static segment or the frames in the dynamic segment.

If you are only interested in seeing the data on the FlexRay bus, then it is actually pretty simple: knowing the communication speed (typically 10 Mbit/s) is enough. There are several oscilloscopes and logic analyzers which can decode the FlexRay protocol. Such a decoder is simple; I use a few lines of AWK script to process the exported CSV file from a cheap logic analyzer to do so (AWK just to show how simple it is).

However if you also want to send data, you need to know nearly all the parameters, otherwise you would most certainly just disturb the bus communication. There are several (expensive) FlexRay analyzers available, which can help to solve this problem.

I wanted to find out if it is also possible to get this done with a relatively cheap (around 100 EUR) development board with a FlexRay interface. While I won't go into the details (maybe this is topic for an upcoming talk), I will just present my experimental setup:

I started with two ECUs (gateway and engine ECU) from a BMW with FlexRay. Those two ECUs are enough for a properly running FlexRay bus (each of those ECUs is a so called coldstart node, you need at least two of them to get the FlexRay bus up and running). To get the development board running with this setup you could start with coarse communication parameters (maybe from measurements with an oscilloscope) and fine-tune them until you can communicate (receive and send frames) without errors. This actually worked quite well.

The next setup consisted of several ECUs from an Audi with FlexRay: I had the gateway, ACC radar and front camera. However only the gateway is a coldstart node, so the FlexRay bus would not start with only those ECUs. I could have bought a second coldstart node ECU (either ABS or airbag), however those ECUs for this specific car are rather expensive. Additionally I wanted to see if it is possible to program the development board as a coldstart node, so I decided to go this way. The problem now is that you don't have a running FlexRay bus to get your first estimation of the communication parameters: the single coldstart node trying to start the bus will only give you a few of them (basically you only have one frame from the static segment). The communication parameters from the BMW won't help either; Audi uses something else (only the 10 Mbit/s bus speed and the 5 ms cycle time are the same). Again I skip the details, but all problems could be resolved and the development board acts as a coldstart node to get the bus running and of course can also properly communicate on the bus.

Lessons learned: You don't necessarily need expensive tools to solve a problem which seems complicated at first glance. If you are willing to spend some time, you can succeed with rather cheap equipment. The additional benefit is that you learn a lot from such an approach.

March 12, 2018 01:00 AM

March 06, 2018

Harald Welte

Report from the Geniatech vs. McHardy GPL violation court hearing

Today, I took some time off to attend the appeal hearing related to a GPL infringement dispute between former netfilter colleague Patrick McHardy and Geniatech Europe.

I am not in any way legally involved in the lawsuit on either the plaintiff or the defendant side. However, as a fellow (former) Linux kernel developer myself, and a long-term Free Software community member who strongly believes in the copyleft model, I of course am very interested in this case.

History of the Case

This case is about GPL infringements in consumer electronics devices based on a GNU/Linux operating system, including the Linux kernel and, at least in some devices, netfilter/iptables. The specific devices in question are a series of satellite TV receivers built by Geniatech, a Shenzhen (China) based company, which is represented in Europe by Germany-based Geniatech Europe GmbH.

The Geniatech Europe CEO has openly admitted (out of court) that they had some GPL incompliance in the past, and that there was failure on their part that needed to be fixed. However, he was not willing to accept an overly wide claim in the preliminary injunction against his company.

The history of the case is that at some point in July 2017, Patrick McHardy made a test purchase of a Geniatech Europe product, and found it infringing the GNU General Public License v2. Apparently no source code (and/or written offer) had been provided alongside the binary - a straight-forward violation of the license terms and hence a violation of copyright. The plaintiff then asked the regional court of Cologne to issue a preliminary injunction against the defendant, which was granted on September 8th, 2017.

In terms of legal procedure, in Germany, when a plaintiff applies for a preliminary injunction, it is immediately granted by the court after brief review of the filing, without previously hearing the defendant in an oral hearing. If the defendant (like in this case) wishes to appeal the preliminary injunction, it files an appeal which then results in an oral hearing. This is what happened, after which the district court of Cologne (Landgericht Koeln) on October 20, 2017 issued ruling 14 O 188/17 partially upholding the injunction.

All in all, nothing particularly unusual about this. There is no dispute about a copyright infringement having existed, and this generally grants any of the copyright holders the right to have the infringing party cease and desist from any further infringement.

However, this injunction has a very wide scope, stating that the defendant was to cease and desist not only from ever publishing, selling, or offering for download any version of Linux (unless compliant with the license). It furthermore asked the defendant to cease and desist

  • from putting hyperlinks on their website to any version of Linux
  • from asking users to download any version of Linux

unless the conditions of the GPL are met, particularly the clauses related to providing the complete and corresponding source code.

The appeals case at OLG Cologne

The defendant now escalated this to the next higher court, the higher regional court of Cologne (OLG Koeln), asking to withdraw the earlier ruling of the lower court, i.e. removing the injunction with its current scope.

The first very positive surprise at the hearing was the depth in which the OLG court has studied the subject matter of the dispute prior to the hearing. In the many GPL related court cases that I witnessed so far, it was by far the most precise analysis of how Linux kernel development works, and this despite the more than 1000 pages of filings that parties had made to the court to this point.

Just to give you some examples:

  • the court understood that Linux was created by Linus Torvalds in 1991 and released under GPL to facilitate the open and collaborative development
  • the court recognized that there is no co-authorship / joint authorship (German: Miturheber) in the Linux kernel as a whole, as it was not a group of people planning+developing a given program together, but it is a program that has been released by Linus Torvalds and has since been edited by more than 15.000 developers without any "grand joint plan" but rather in successive iterations. This situation constitutes "editing authorship" (German: Bearbeiterurheber)
  • the court further recognized that being listed as "head of the netfilter core team" or a "subsystem maintainer" doesn't necessarily mean that one is contributing copyrightable works. Reviewing thousands of patches doesn't mean you own copyright on them, drawing an analogy to an editorial office at a publisher.
  • the court understood there are plenty of Linux versions that may not even contain any of Patrick McHardy's code (such as older versions)

After about 35 minutes of the presiding judge explaining the court's understanding of the case (and how kernel development works), he went on to summarize the court's internal deliberations prior to the hearing.

In this summary, the presiding judge stated very clearly that they believe there is some merit to the arguments of the defendant, and that they would be inclined toward a ruling favorable to the defendant based on their current understanding of the case.

He cited the following main reasons:

  • The Linux kernel development model does not support the claim of Patrick McHardy having co-authored Linux. In so far, he is only an editing author (Bearbeiterurheber), and not a co-author. Nevertheless, even an editing author has the right to ask for cease and desist, but only on those portions that he authored/edited, and not on the entire Linux kernel.
  • The plaintiff did not sufficiently show what exactly his contributions were and how they themselves constituted copyrightable works
  • The plaintiff did not substantiate what copyrightable contributions he has made outside of netfilter/iptables. His mere listing as general networking subsystem maintainer does not clarify what his copyrightable contributions were
  • The plaintiff being a member of the netfilter core team or even the head of the core team still doesn't support the claim of being a co-author, as netfilter substantially existed since 1999, three years before Patrick's first contribution to netfilter, and five years before he joined the core team in 2004.

So all in all, it was clear that the court also thought the ruling on all of Linux was too far-reaching.

The court suggested that it might be better to have regular main proceedings, in which expert witnesses can be called and real evidence has to be provided, as opposed to the constraints of the preliminary procedure that was applied currently.

Some other details that were mentioned somewhere during the hearing:

  • Patrick McHardy apparently unilaterally terminated the license to his works in an e-mail dated 26th of July 2017 towards the defendant. According to the defendant (and general legal opinion, including my own position), this is in turn a violation of the GPLv2, as it only allowed plaintiff to create and publish modified versions of Linux under the obligation that he licenses his works under GPLv2 to any third party, including the defendant. The defendant believes this is abuse of his rights (German: Rechtsmissbraeuchlich).
  • sworn affidavits of senior kernel developer Greg Kroah-Hartman and current netfilter maintainer Pablo Neira were presented in support of some of the defendant's claims. The contents of those are unfortunately not public, and neither are the contents of the sworn affidavits presented by the plaintiff.
  • The defendant has made substantiated claims in his filings that Patrick McHardy would perform his enforcement activities not with the primary motivation of achieving license compliance, but as a method to generate monetary gain. Such claims include that McHardy has acted in more than 38 cases, in at least one of which he has requested a contractual penalty of 1.8 million EUR. The total amount of monies received as contractual penalties was quoted as over 2 million EUR to this point. Please note that those are claims made by the defendant, which were just reproduced by the court. The court has not assessed their validity. However, the presiding judge explicitly stated that he received a phone call about this case from a lawyer known to him personally, who confirmed that large contractual penalties are being paid in other related cases.
  • One argument by the plaintiff seems to center around being listed as a general kernel networking maintainer until 2017 (despite his latest patches being from 2015, and those were netfilter only)

Withdrawal by Patrick McHardy

At some point, the court hearing was temporarily suspended to provide the legal representation of the plaintiff with the opportunity to have a phone call with the plaintiff to decide if they would want to continue with their request to uphold the preliminary injunction. After a few minutes, the hearing was resumed, with the plaintiff withdrawing their request to uphold the injunction.

As a result, the injunction is now withdrawn, and the plaintiff has to bear all legal costs (court fees, lawyers costs on both sides).

Personal Opinion

For me, this is all of course a difficult topic. With my history of being the first to enforce the GNU GPLv2 in (equally German) court, it is unsurprising that I am in favor of license enforcement being performed by copyright holders.

I believe individual developers who have contributed to the Linux kernel should have the right to enforce the license, if needed. It is important to have distributed copyright, and to avoid a situation where only one (possibly industry friendly) entity would be able to take [legal] action.

I'm not arguing for a "too soft" approach. It's almost 15 years since the first court cases on license violations on (embedded) Linux, and the fact that the problem still exists today clearly shows the industry is very far from having solved a seemingly rather simple problem.

On the other hand, such activities must always be oriented to compliance, and compliance only. Collecting huge amounts of contractual penalties is questionable. And if it was necessary to collect such huge amounts to motivate large corporations to be compliant, then this must be done in the open, with the community knowing about it, and the proceeds of such contractual penalties must be donated to free software related entities to prove that personal financial gain is not a motivation.

The rumors of Patrick performing GPL enforcement for personal financial gain have been around for years. It was initially very hard for me to believe. But as more and more about this became known, and as Patrick refused all contact requests from his former netfilter team-mates as well as the wider kernel community, it became hard to avoid drawing the related conclusions.

We do need enforcement, both out of court and in court. But we need it to happen out of the closet, with the community in the picture, and without financial gain to individuals. The "principles of community oriented enforcement" of the Software Freedom Conservancy as well as the more recent (but much less substantial) kernel enforcement statement represent the most sane and fair approach for how we as a community should deal with license violations.

So am I happy with the outcome? Not entirely. It's good that an over-reaching injunction was removed. But then, a lot of money and effort was wasted on this, without any verdict/ruling. It would have been IMHO better to have a court ruling published, in which the injunction is substantially reduced in scope (e.g. only about netfilter, or specific versions of the kernel, or specific products, not about placing hyperlinks, etc.). It would also have been useful to have some of the other arguments end up in a written ruling of a court, rather than more or less "evaporating" in the spoken word of the hearing today, without advancing legal precedent.

Lessons learned for the developer community

  • In the absence of detailed knowledge on computer programming, legal folks tend to look at "metadata" more, as this is what they can understand.
  • It matters who has which title and when. Should somebody not be an active maintainer, make sure he's not listed as such.
  • If somebody ceases to be a maintainer or developer of a project, remove him or her from the respective lists immediately, not just several years later.
  • Copyright statements do matter. Make sure you don't merge any patches adding copyright statements without being sure they are actually valid.

Lessons learned for the IT industry

  • There may be people doing GPL enforcement for not-so-noble motives
  • Defending yourself against claims in court can very well be worth it, as opposed to simply settling out of court (presumably for some money). The Telefonica case in 2016 has shown this, as has the current Geniatech case. The legal system can work, if you give it a chance.
  • Nevertheless, if you have violated the license, and one of the copyright holders makes a properly substantiated claim, you still will get injunctions granted against you (and rightfully so). This was just not done in this case (not properly substantiated, scope of injunction too wide/coarse).

Dear Patrick

For years, your former netfilter colleagues and friends wanted to have a conversation with you. You have not returned our invitation so far. Please do reach out to us. We won't bite, but we want to share our views with you, and show you what implications your actions have not only on Linux, but also particularly on the personal and professional lives of the very developers that you worked hand-in-hand with for a decade. It's your decision what you do with that information afterwards, but please do give us a chance to talk. We would greatly appreciate if you'd take up that invitation for such a conversation. Thanks.

by Harald Welte at March 06, 2018 11:00 PM

February 27, 2018

Bunnie Studios

Name that Ware, February 2018

The Ware for February 2018 is shown below.

Ware courtesy of Hernandi Krammes!

Every board designer leaves a thumbprint on their ware — and this one is from a region I had never seen a ware from before. So while the function is probably easy to guess, I still appreciated it for the small, unique details.

by bunnie at February 27, 2018 08:20 AM

Winner, Name that Ware January 2018

The Ware for January 2018 is a front panel VFD/switch controller board for an HP Laserjet 4+. Archels nailed it — I checked u19pb1996 in Google for hits and nothing came up, but maybe I was too hasty and typo’d the number when cross-checking the image. Anyways, this post is now the top hit for that part number :) Congrats, email me for your prize.

+1 to zebonaut’s comment about the firmware code from the early 90’s never needing an update, ever…they just don’t write software like they used to anymore! It’s funny to see the panic in the eyes of a modern software developer when you tell them a subsystem has no firmware update path, ever, and their code just has to work reliably from day one. And then you tell a hardware developer the same thing and they go “yah, so?”…there’s no such thing as a downloadable hardware update, of course the product ships complete, working & tested. And not only does the hardware have to work, it carries a warranty, unlike most software…

by bunnie at February 27, 2018 08:18 AM

February 11, 2018

Bunnie Studios

When More is Less: China’s Perception of the iPhone X “Notch”

I recently saw a Forbes article citing rumors that the iPhone X is being cancelled this summer. Assuming the article is correct, a “lack of interest in China” is the main reason for the relatively early cancellation of production, and Apple is hoping that 6.1″ and 6.5″ versions of the phone with a less pronounced Face ID notch will excite Chinese customers.

The notion of a “less pronounced” Face ID notch is what got me — Apple embracing the notch as iconic, and worth carrying forward to future models, rather than simply making the top bezel a bit larger and eliminating the notch altogether. Historically, Apple has taken a “less is more” strategy, meticulously eliminating even the tiniest design facets: replacing radii with splines, polishing off injection mold parting lines, even eliminating the headphone jack. Putting a notch on the iPhone feels a bit like watching a woman painstakingly apply face whitening cream day after day to remove tiny blemishes, and then don a red clown nose.

Like the red clown nose, the problem with pushing the notch is that anyone can put one on, should they decide it’s a feature they want to copy. Xobs recently showed me an app on his Xiaomi Mix 2 that does exactly that. Below is what his Xiaomi Mix 2 looks like normally.

It’s got a screen that goes right up to the top bezel, without a notch.

Interestingly, there’s an app you can run called “X out of 10” that simply draws in the notch (including subtle details like a simulated camera lens). Here’s the app in the off state:

And now in the on state:

Once activated and given permission to draw over other apps, Xiaomi Mix 2 users can don the red clown nose and experience the full glory of the iconic Apple notch all the time:

This glass-half-empty situation is a parable for design leadership versus market perception: if a market previously lacked a smartphone with a minimal top bezel, the notch is perceived as “How innovative! I’ve got extra pixels to the left and right of my earpiece/camera assembly!”. But once a market has seen a smartphone with minimal top bezel, the notch turns into “Hey where did my pixels go? What’s this notch doing here?”. It’s a case where the additional design feature is seen as a loss of function, not a gain.

Thus it will be interesting to see if Apple’s bet to introduce a phone with a larger screen that can compete head to head in China against the likes of the Xiaomi Mix 2’s 6″ screen will pay off, especially if Apple retains the notch.

Of course, as the design space for phones becomes more and more crowded, Apple’s room to maneuver becomes increasingly limited. The minimalist design space is winner-takes-all: the first company to elegantly remove a design facet wins the minimalism race, and now that Xiaomi has planted a flag in the bezel-less top space, it may be that Apple has no option but to sport the top-notch, or run the risk of being seen as copying a Chinese company’s design language.

Edit (added Feb 12, 17:43 SGT):

Several comments have been made about the iPhone X still having a greater amount of screen real estate than the Xiaomi Mix 2.

To clarify, the key point of the article isn’t about comparing active area. It’s about running out of options to place a sensor cluster, because the smartphone design space has gotten a lot more competitive. To spell it out explicitly, there are three main ways this can play out:

    1) Apple can’t hide a camera underneath the display, and so there always has to be a “dark area” that’s an affordance for the camera (and more significantly, the multitude of sensors that comprise FaceID).
    2) Apple (or perhaps someone else!) figures out how to hide a camera under the display and creates a true bezel-to-bezel phone.
    3) Apple convinces us all that the notch is truly iconic and it’s hailed as one of the greatest design innovations of this decade (hey, they did it for the headphone jack…).

In the case of 1 (Apple can’t hide the sensor cluster), these are their options:

    (a) Continue to push the top notch as iconic – status quo
    (b) Lose the notch by increasing top bezel area for sensor cluster — that’s “taking a step backward” – so not really an option
    (c) Move sensor cluster to the bottom, with no notch. This is copying the Xiaomi Mix 2 almost exactly – so not an option
    (d) Continue to push the notch as iconic, but put it on the bottom. Risks the top-half of the phone looking too much like a Xiaomi Mix 2 – so probably not an option

So in the race for minimalism, because Xiaomi has “claimed” the minimal bezel top-half design space, Apple has far fewer options for backing out of the notch, should it be perceived by the market as a loss of real estate, rather than a gain. But this is the world Apple has created for themselves, by patenting and litigating over the rounded rectangle as a phone design.

In the case of 2 (Apple figures out how to hide all the sensors), Apple can really win the minimalist design space if they can do it without reducing functionality. However, if they could have done this, I think they would have done it for the X. They certainly have the cash to throw the equivalent budget of SpaceX’s Falcon rocket program into eliminating that notch. Indeed, perhaps in a year or two Apple will come out with some crazy fiber optic wave guide assembly with holographic lenses to wrap light around the bezel into a sensor assembly stashed in the body of the phone. I wouldn’t put it beyond them.

But until then, it seems Apple is looking at option (1) for the next generation at least, and the point of this article is that the competition has robbed Apple of at least two options for elegantly backing out of the notch and creating a phone with greater appeal to markets like China.

by bunnie at February 11, 2018 12:30 PM

ICE40 for Novena

For any and all Novena users, a quick note: Philipp Gühring is organizing a production run of ICE40 FPGA add-in cards for people who want a 100% open software stack for making FPGA bitfiles. Register your interest in the production run by visiting his website!

by bunnie at February 11, 2018 09:08 AM

February 01, 2018

Bunnie Studios

Spectre/Meltdown Pits Transparency Against Liability: Which is More Important to You?

There is a lot of righteous anger directed toward Intel over CPU bugs that were revealed by Spectre/Meltdown. I agree that things could have been handled better, particularly with regards to transparency and the sharing of information among the relevant user communities that could have worked together to deploy effective patches in a timely fashion. People also aren’t wrong that consumer protection laws obligate manufacturers to honor warranties, particularly when a product is not fit for use as represented, if it contains defective material or workmanship, or fails to meet regulatory compliance.

However, as an open source hardware optimist, and someone who someday aspires to see more open source silicon on the market, I want to highlight that demanding Intel return, exchange, or offer rebates on CPUs purchased within a reasonable warranty period is entirely at odds with demands that Intel act with greater transparency in sharing bugs and source code.

Transparency is Easy When There’s No Penalty for Bugs

It’s taken as motherhood and apple pie in the open source software community that transparency leads to better products. The more eyes staring at a code base, the more bugs that can be found and patched. However, a crucial difference between open source software and hardware is that open source software carries absolutely no warranty. Even the most minimal, stripped down OSS licenses stipulate that contributors carry no liability. For example, the BSD 2-clause license has 189 words, of which 116 (60%) are dedicated to a “no warranty” clause – and all in caps, in case you weren’t paying attention. The no-warranty clause is so core to any open source license it doesn’t even count as a clause in the 2-clause license.

Of course contributors have no liability: this lack of liability is fundamental to open source. If people could sue you for some crappy code you pushed to github years ago, why would you share anything? Github would be a ticking time bomb of financial ruin for every developer.

It’s also not about code being easier to patch than hardware. The point is that you don’t have to patch your code, even if you could. Someone can file a bug against you, and you have the legal right to ignore it. And if your code library happens to contain an overflow bug that results in a house catching fire, you walk away scot-free because your code came with no warranty of fitness for any purpose whatsoever.

Oohh, Shiny and New!

Presented with a bin of apples, most of us will pick a blemish-free fruit from the bushel before heading to the check-out counter. Despite knowing the reality of nature – that every fruit must grow from a blossom under varying conditions and hardships – we believe our hard-earned money should only go toward the most perfect of the lot. This feeling is so common sense that it’s codified in the form of consumer protection laws and compulsory warranties.

This psychology extends beyond obvious blemishes, to defects that have no impact on function. Suppose you’re in the market for a one-slot toaster. You’re offered two options: a one-slot toaster, and a two-slot toaster but with the left slot permanently and safely disabled. Both are exactly the same price. Which one do you buy?

Most people would buy the toaster with one slot, even though the net function might be identical to the two-slot version where one slot is disabled. In fact, you’d probably be infuriated and demand your money back if you bought the one-slot toaster, but opened the box to find a two-slot toaster with one slot disabled. We don’t like the idea of being sold goods that have anything wrong with them, even if the broken piece is irrelevant to performance of the device. It’s perceived as evidence of shoddy workmanship and quality control issues.

News Flash: Complex Systems are Buggy!

Hold your breath – I’d wager that every computer you’ve bought in the past decade has broken parts inside of it, almost exactly like the two-slot toaster with one slot permanently disabled. There’s the set of features that were intended to be in your chips – and there’s the subset of features that finally shipped. What happened to the features that weren’t shipped? Surely, they did a final pass on the chip to remove all that “dead silicon”.

Nope – most of the time those partially or non-functional units are simply disabled. This ranges from blocks of cache RAM, to whole CPU cores, to various hardware peripherals. Patching a complex chip design can cost millions of dollars and takes weeks or even months, so no company can afford to do a final “clean-up” pass to create a “perfect” design. To wit, manufacturers never misrepresent the product to consumers – if half the cache was available, the spec sheet would simply report the cache size as 128kB instead of 256kB. But surely some customers would have complained bitterly if they knew of the defect sold to them.

Despite their chips being chock full of bugs, vendors of desktop CPUs or mobile phone System on Chips (SoCs) rarely disclose these bugs to users – and those that do disclose almost always publish a limited list of public bugs, backed by an NDA-only list of all the bugs. The top two reasons cited for keeping chip specs secret are competitive advantage and liability, and I suspect in reality it’s the latter that drives the secrecy, because the crappier the chipset, the more likely the specs are under NDA. Chip vendors are deathly afraid users will find inconsistencies between the chip’s actual performance and the published specs, thus triggering a recall event. This fear may seem more rational if you consider the magnitude of Intel’s FDIV bug recall ($475 million in 1994).


This is a pretty typical list of SoC bugs, known as “errata”. If your SoC’s errata list is much shorter than this, it’s more likely due to bugs not being disclosed than to there actually being fewer bugs.

If you Want Messages, Stop Shooting the Messengers

Highly esteemed and enlightened colleagues of mine are strongly of the opinion that Intel should reimburse end users for bugs found in their silicon; yet in the same breath, they complain that Intel has not been transparent enough. The point that has become clear to me is that consumers, even open-source activists, are very sensitive to imperfections, however minor. They demand a “perfect” machine; if they spend $500 on a computer, every part inside better damn well be perfect. And so starts the vicious cycle of hardware manufacturers hiding all sorts of blemishes and shortcomings behind various NDAs, enabling them to bill their goods as perfect for use.

You can’t have it both ways: the whole point of transparency is to enable peer review, so you can find and fix bugs more quickly. But if every time a bug is found, a manufacturer had to hand $50 to every user of their product as a concession for the bug, they would quickly go out of business. This partially answers the question why we don’t see open hardware much beyond simple breakout boards and embedded controllers: it’s far too risky from a liability standpoint to openly share the documentation for complex systems under these circumstances.

To simply say, “but hardware manufacturers should ship perfect products because they are taking my money, and my code can be buggy because it’s free of charge” – is naïve. A modern OS has tens of millions of lines of code, yet it benefits from the fact that every line of code can be replicated perfectly. Contrast that with a modern CPU with billions of transistors, each with slightly different electrical characteristics. We should all be more surprised that it took so long for a major hardware bug to be found than by the fact that one was ever found.

Complex systems have bugs. Any system with primitives measured in the millions or billions – be it lines of code, rivets, or transistors – is going to have subtle, if not blatant, flaws. Systems simple enough to formally verify are typically too simple to handle real-world tasks, so engineers must rely on heuristics like design rules and lots and lots of hand-written tests.

There will be bugs.

Realities of the Open Hardware Business

About a year ago, I had a heated debate with a SiFive founder about how open they can get about their documentation. SiFive markets the RISC-V CPU, billed as an “open source CPU”, and many open source enthusiasts got excited about the prospect of a fully-open SoC that could finally eliminate proprietary blobs from the boot chain and ultimately through the same process of peer review found in the open source software world, yield a more secure, trustable hardware environment.

However, even one of their most ardent open-source advocates pushed back quite hard when I suggested they should share their pre-boot code. By pre-boot code, I’m not talking about the little ROM blob that gets run after reset to set up your peripherals so you can pull your bootloader from SD card or SSD. That part was a no-brainer to share. I’m talking about the code that gets run before the architecturally guaranteed “reset vector”. A number of software developers (and alarmingly, some security experts) believe that the life of a CPU begins at the reset vector. In fact, there’s often a significant body of code that gets executed on a CPU to set things up to meet the architectural guarantees of a hard reset – bringing all the registers to their reset state, tuning clock generators, gating peripherals, and so forth. Critically, chip makers heavily rely upon this pre-boot code to also patch all kinds of embarrassing silicon bugs, and to enforce binning rules.

The gentleman with whom I was debating the disclosure of pre-boot code adamantly held that it was not commercially viable to share the pre-boot code. I didn’t understand his point until I witnessed open-source activists en masse demanding their pound of flesh for Intel’s mistakes.

As engineers, we should know better: no complex system is perfect. We’ve all shipped bugs, yet when it comes to buying our own hardware, we individually convince ourselves that perfection is a reasonable standard.

The Choice: Truthful Mistakes or Fake Perfection?

The open source community could use the Spectre/Meltdown crisis as an opportunity to reform the status quo. Instead of suing Intel for money, what if we sue Intel for documentation? If documentation and transparency have real value, then this is a chance to finally put that value in economic terms that Intel shareholders can understand. I propose a bargain somewhere along these lines: if Intel releases comprehensive microarchitectural hardware design specifications, microcode, firmware, and all software source code (e.g. for AMT/ME) so that the community can band together to hammer out any other security bugs hiding in their hardware, then Intel is absolved of any payouts related to the Spectre/Meltdown exploits.

This also sets a healthy precedent for open hardware. In broader terms, my proposed open hardware bargain is thus: Here’s the design source for my hardware product. By purchasing my product, you’ve warranted that you’ve reviewed the available design source and decided the open source elements, as-is, are fit for your application. So long as I deliver a product consistent with the design source, I’ve met my hardware warranty obligation on the open source elements.

In other words, the open-source bargain for hardware needs to be a two-way street. The bargain I set forth above:

  • Rewards transparency with indemnity against yet-to-be-discovered bugs in the design source
  • Burdens any residual proprietary elements with the full liability of fitness for purpose
  • Simultaneously preserves the guarantee that a product is free from defects in materials and workmanship in either case

The beauty of this bargain is it gives a real economic benefit to transparency, which is exactly the kind of wedge needed to drive closed-source silicon vendors to finally share their full design documentation, with little reduction of consumer protection.

So, if we really desire a more transparent, open world in hardware: give hardware makers big and small the option to settle warranty disputes for documentation instead of cash.

Author’s Addendum (added Feb 2 14:47 SGT)
This post has 2 aspects to it:

The first is whether hardware makers will accept the offer to provide documentation in lieu of liability.

The second, and perhaps more significant, is whether you would make the offer for design documentation in lieu of design liability in the first place. It’s important that companies who choose transparency be given a measurable economic advantage over those who choose obscurity. In order for the vicious cycle of proprietary hardware to be broken, both consumer and producer have to express a willingness to value openness.

by bunnie at February 01, 2018 05:32 PM

January 29, 2018

Bunnie Studios

Name that Ware, January 2018

The Ware for January 2018 is shown below.

This side of the board might be a little too nondescript to make a solid guess, so if nobody gets it within a couple of weeks, I’ll push a picture of the other side.

Thanks to spida for handing me pictures of this well-photographed ware at 34C3!

by bunnie at January 29, 2018 11:00 AM

Winner, Name that Ware December 2017

The winner of Name that Ware Dec 2017 is Piotr! Congrats, email me for your prize. He nailed it very quickly as a 2000-series multimeter by Keithley. I figured since this is a classic meter found on many engineer’s benches, it would be named fairly quickly, even with a tightly cropped shot of the main board.

My decades-old Keithley 2000 finally gave up the ghost; something seems wrong in the input stage that’s preventing it from both doing autoranging and measuring negative voltages. A quick thermal scan showed a few transistors getting hot that shouldn’t be, and the self-test codes indicate that something may have gone very wrong with the core ADC pathway. I’ve set it aside to look at later and ended up picking up a Keithley 2110 as a stand-in replacement until I can afford a DMM7510. There’s some pretty nifty debugging tricks you can do with a 7.5-digit multimeter (it’s sensitive enough to differentiate between code execution paths on an MCU, or observe a battery’s self-discharge rate in real time), but at the moment I simply can’t afford one. Ah well — it’s always good to have goals!

by bunnie at January 29, 2018 11:00 AM

January 22, 2018

Dieter Spaar

Modify the VoIP provider of a Speedport ISDN Adapter

The Speedport ISDN Adapter is a relatively cheap VoIP-to-ISDN adapter from Deutsche Telekom. The drawback is that the adapter is "locked" to Deutsche Telekom and the user interface (web interface) is disabled.

Here is how you can access the web interface. While the adapter is already powered on, you have to press both the "Register" and "Reset" buttons at once for more than 20 seconds. This will temporarily enable a still-limited web interface; just point your browser to the IP address of your ISDN adapter. The login password is the "device password" written on the bottom side of the case.

To fully enable the web interface you have to do a bit more. I use "curl" which is available for nearly all operating systems. You have to replace "12345678" with the device password and "192.168.1.2" with the IP address of your adapter.

curl -d "login_pwd=1&pws=12345678" "http://192.168.1.2/cgi-bin/login.cgi"
curl -d "password=12345678&debug_enable=1&uart_tx_eb=1" "http://192.168.1.2/cgi-bin/debug.cgi"

Now the web interface is fully functional and you can modify the settings, e.g. disable the TR-069 interface and enter the parameters for the VoIP account of your provider. "uart_tx_eb" enables the serial console, which offers a few debug commands. However you have to open the case to get access to the serial console.

January 22, 2018 01:00 AM

January 21, 2018

Dieter Spaar

More LTE Base Stations

Since running my own first LTE eNodeB in 2013, I have acquired a few more. I now have one each from NSN, Ericsson and Huawei. All of them are fully functional, including the required Remote Radio Heads to actually transmit.

Operating an LTE eNodeB is not very complicated; these days it is even easier with software like NextEPC. The tricky part is setting up and configuring the LTE eNodeB, because there is no standard way to do so and every manufacturer has its own way. If you are lucky, you might get an already configured system and there is not much you have to adjust. If your system is new or the configuration has been erased, then it can get complicated, at least for the Ericsson eNodeB I have.

January 21, 2018 01:00 AM

December 31, 2017

Harald Welte

Osmocom Review 2017

As 2017 has just concluded, let's have a look at the major events and improvements in the Osmocom Cellular Infrastructure projects (i.e. those projects dealing with building protocol stacks and network elements for mobile network infrastructure).

I've prepared a detailed year 2017 summary at the osmocom.org website, but let me write a bit about the most note-worthy topics here.

NITB Split

Once upon a time, we implemented everything needed to operate a GSM network inside a single process called OsmoNITB. Those days are now gone, and we have separate OsmoBSC, OsmoMSC, OsmoHLR, OsmoSTP processes, which use interfaces that are interoperable with non-Osmocom implementations (which is what some of our users require).

This change is certainly the most significant change in the close-to-10-year history of the project. However, we have tried to make it as non-intrusive as possible, by using default point codes and IP addresses which will make the individual processes magically talk to each other if installed on a single machine.

We've also released an OsmoNITB Migration Guide, as well as our usual set of user manuals, in order to help our users.

We'll continue to improve the user experience, to re-introduce some of the features lost in the split, such as the ability to attach names to the subscribers.

Testing

We have osmo-gsm-tester together with the two physical setups at the sysmocom office, which continuously run the latest Osmocom components and test an entire matrix of different BTSs, software configurations and modems. However, this testing runs at very low load, and it covers only signalling so far, not the user plane yet. Hence, coverage is limited.

We also have unit tests as part of the 'make check' process, Jenkins-based build verification before merging any patches, as well as integration tests for some of the network elements in TTCN-3. This is much more than we had until 2016, but still by far not enough, as we have just seen from the fall-out of the sub-optimal 34C3 event network.

OsmoCon

2017 also marks the year where we've for the first time organized a user-oriented event. It was a huge success, and we will for sure have another OsmoCon incarnation in 2018 (most likely in May or June). It will not be back-to-back with the developer conference OsmoDevCon this time.

SIGTRAN stack

We have a new SIGTRAN stack with SUA, M3UA and SCCP, as well as OsmoSTP. This had been lacking for a long time.

OsmoGGSN

We have converted OpenGGSN into a true member of the Osmocom family, thereby deprecating OpenGGSN which we had earlier adopted and maintained.

by Harald Welte at December 31, 2017 11:00 PM

34C3 and its Osmocom GSM/UMTS network

At the 34th annual Chaos Communication Congress, a team of Osmocom folks continued the many years old tradition of operating an experimental Osmocom based GSM network at the event. Though I've originally started that tradition, I'm not involved in installation and/or operation of that network, all the credits go to Lynxis, neels, tsaitgaist and the larger team of volunteers surrounding them. My involvement was only to answer the occasional technical question and to look at bugs that show up in the software during operation, and if possible fix them on-site.

34C3 marks two significant changes in terms of its cellular network:

  • the new post-nitb Osmocom stack was used, with OsmoBSC, OsmoMSC and OsmoHLR
  • both a GSM/GPRS network (on 1800 MHz) and, for the first time, a UMTS network (in the 850 MHz band) were operated

The good news is: The team did great work building this network from scratch, in a new venue, and without relying on people that have significant experience in network operation. Definitely, the team was considerably larger and more distributed than at the time when I was still running that network.

The bad news is: There was a seemingly endless number of bugs that were discovered while operating this network. Some shortcomings were known before, but the extent and number of bugs uncovered all across the stack was quite devastating to me. Sure, at some point from day 2 onwards we had a network that provided [some level of] service, and as far as I've heard, some ~ 23k calls were switched over it. But that was after more than two days of debugging + bug fixing, and we still saw unexplained behavior and crashes later on.

This came as quite a surprise, as we have put a lot of effort into testing over the last years. This starts from the osmo-gsm-tester software and continuously running test setup, and continues with the osmo-ttcn3-hacks integration tests that mainly I wrote during the last few months. Both we and some of our users have also (successfully!) performed interoperability testing with other vendors' implementations such as MSCs. And last, but not least, the individual Osmocom developers had been using the new post-NITB stack on their personal machines.

So what does this mean?

  • I'm sorry about the sub-standard state of the software and the resulting problems we've experienced in the 34C3 network. The extent of problems surprised me (and I presume everyone else involved)
  • I'm grateful that we've had the opportunity to discover all those bugs, thanks to the GSM team at 34C3, as well as Deutsche Telekom for donating 3 ARFCNs from their spectrum, as well as the German regulatory authority Bundesnetzagentur for providing the experimental license in the 850 MHz spectrum.
  • We need to have even more focus on automatic testing than we had so far. None of the components should be without exhaustive test coverage on at least the most common transactions, including all their failure modes (such as timeouts, rejects, ...)

My preferred method of integration testing has been to use TTCN-3 and Eclipse TITAN to emulate all the interfaces surrounding a single one of the Osmocom programs (like OsmoBSC) and then test both valid and invalid transactions. For the BSC, this means emulating MS+BTS on Abis; emulating the MSC on A; and emulating the MGW, as well as the CTRL and VTY interfaces.

I currently see the following areas in biggest need of integration testing:

  • OsmoHLR (which needs a GSUP implementation in TTCN-3, which I've created on the spot at 34C3) where we e.g. discovered that updates to the subscriber via VTY/CTRL would surprisingly not result in an InsertSubscriberData to VLR+SGSN
  • OsmoMSC, particularly when used with external MNCC handlers, which was so far blocked by the lack of a MNCC implementation in TTCN-3, which I've been working on both on-site and after returning back home.
  • user plane testing for OsmoMGW and other components. We currently only test the control plane (MGCP), but not the actual user plane e.g. on the RTP side between the elements
  • UMTS related testing on OsmoHNBGW, OsmoMSC and OsmoSGSN. We currently have no automatic testing at all in these areas.

Even before 34C3 and the above-mentioned experiences, I concluded that for 2018 we will pursue a test-driven development approach for all new features added by the sysmocom team to the Osmocom code base. The experience with the many issues at 34C3 has just confirmed that approach. In parallel, we will have to improve test coverage on the existing code base, as outlined above. The biggest challenge will of course be to convince our paying customers of this approach, but I see very little alternative if we want to ensure production quality of our cellular stack.

So here we come: 2018, The year of testing.

by Harald Welte at December 31, 2017 11:00 PM

Michele's GNSS blog

One year in review


As 2017 heads to its end, it calls for a retrospective. Which events characterized this year the most?
The two most prominent magazines in the domain (gpsworld.com and insidegnss.com) dedicate a piece of their December issue to this very topic, and one can find their points of view there, which I hope to complement here. Let's begin with GNSS itself, in no particular order whatsoever.

GPS.
Probably one of the most difficult years for the Navstar constellation, with no satellites launched and OCX still not ready. Yes, the US maintains the gold standard of satellite radio-navigation, but it's hard to predict whether and for how long that will hold true. In Congress itself there is strong controversy over how adequate GPS is for 21st-century warfare. As the vulnerabilities of the SIS against jamming and spoofing became evident to everyone, and as Navstar satellites continue to exceed their life expectancy, most of the R&D in the US was dedicated to PTA (Protect, Toughen and Augment), with little room for modernization. Yet, despite reading about the readiness of the first GPSIII satellites and even a new GPSIIIF generation in the works, it all sounded to me quite like a collection of forward-looking statements and promotional material.

Galileo.
Europe managed to declare initial services and launch 4 satellites this year. On the web it is not difficult to gather proof that worldwide acceptance of the EU system is growing... many GNSS companies have now claimed compatibility of their services with Galileo, and even penetration into smartphone chipsets became a reality this year (I was already involved with BQ and its Qualcomm IZAT 8C last year). We will not see another quadruple launch for a few more months now, but in the meantime there will be ~20 quadruple-open-frequency birds to work with in 2018. As resilience is developed on the GPS side, modernization and new multi-carrier signal processing techniques are being worked on in the old continent.

Beidou.
The Chinese government promised 6 BDS3 satellites this year, later de-scoped to 4 and finally managed to launch 2 last November. Not discouraging though, as the 4 gone missing this year are queued up for flying in early 2018 ...together with another 12. Yes, 16 in total. If even half of that promise is kept, it would put Beidou on the top of the list of GNSS to watch out for. It feels like Beidou is at a cross-roads as new generation birds don't appear to broadcast B2 (the 2 MHz signal on E5b), but just B1. So multi-frequency users will have two heterogeneous groups to deal with: BDS2 with B1+B2 (BPSK2 on 1526f0 and 1180f0) and BDS3 with the legacy B1 plus B1C (TMBOC at 1540f0) and B2a (BPSK10 at 1150f0). Understandably there isn't much clarity towards foreign users on which signal/services combinations China will continue to support or slowly discontinue.
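For reference (my own back-of-the-envelope numbers, assuming the usual GNSS fundamental frequency f0 = 1.023 MHz), those multipliers work out to:

B1  (legacy): 1526 × 1.023 MHz ≈ 1561.10 MHz
B2  (BDS2):   1180 × 1.023 MHz ≈ 1207.14 MHz (the E5b frequency)
B1C (BDS3):   1540 × 1.023 MHz = 1575.42 MHz (the GPS L1 / Galileo E1 frequency)
B2a (BDS3):   1150 × 1.023 MHz = 1176.45 MHz (the GPS L5 / Galileo E5a frequency)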

Glonass.
I cannot help but be rather unexcited about Glonass. Russia launched one satellite in September this year. The system has greatly improved, but FDMA signals at frequencies far enough away from the GPS ones to need a separate RF down-conversion chain do not sound fun to me. However, many more launches (18 according to this source?) are scheduled for 2018, with as many as 5 K2 generation (full CDMA) birds. That, yes, would change my mind. Right now, with codes already just half as accurate as GPS L1C/A, L1OF/L2OF to me look only useful for high-sensitivity urban navigation. Don't get me wrong, it's all very good, but just a great headache of biases for precision navigation. Glonass modernization is about switching to CDMA signals and new frequencies (L1+E5b on K2, E5a on KM), but it could be a long time before there will be enough satellites to make that constellation a concrete option.
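To put rough numbers on "far away" (my own summary, not from the original post): the legacy FDMA signals sit on per-satellite channels at approximately f(L1OF) = 1602 MHz + k × 0.5625 MHz and f(L2OF) = 1246 MHz + k × 0.4375 MHz, with channel numbers k from −7 to +6. That places L1OF at roughly 1598–1606 MHz, close to, but outside, the passband of a typical front-end tuned to GPS L1 at 1575.42 MHz.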

QZSS.
Japan quietly launched 3 satellites this year. QZSS is often overlooked as a purely regional system, versus its global counterparts. But each satellite carries an advanced navigation payload, capable of many frequencies and signals, not to mention the services on top of them. So Japan now has a regional constellation of GPSIII-like satellites which are almost always in view. That comes with great benefit for users in the Asia-Pacific region, as each QZSS satellite is worth approximately 4+ GPSIIF MEO birds. So effectively multi-frequency users have the equivalent of a FOC of GPSIIF - which is pretty cool. I encourage everyone to follow Tomoji's and Takuji's blogs for more hacker-friendly info on QZSS.

Others.
On the side, I followed India's Navic and Australia's efforts towards an SBAS, but I am not very up to date with those or other systems, so feel free to share updates and links with me in the comments section and I will try to integrate them.

System of systems 2017 wrap-up.
Many ask questions about GNSS adoption and especially the future roles of Galileo, Beidou and Glonass. There are always unknowns of course, but to me the answer is clear: Beidou3 is (involuntarily?) Galileo's best mate, and Glonass will eventually align. The new Beidou English ICD is out and all bets are off: BDS3 promotes a dual-frequency L1C+L5 open service. Navstar's L2C already seems old, and yet user-segment adoption of it is largely immature. Most services and receivers still rely on L2P(Y), as 19 satellites aren't enough to justify switching across to the civil signal, despite its many benefits over semi-codeless tracking of the encrypted military one. The GPSIIF L5 signal is affected by line biases which make PPP difficult, but Galileo is free of them if my results are correct. Incidentally I have not yet tried with BDS3 satellites... it's slightly more tricky without a third open frequency. When the legacy P(Y) signals are turned off we might witness a shift from GPS L1C/A + L2P to GPS/Galileo/Beidou L1 + L5 before L2C has a chance of picking up universally. Each system will offer a third open frequency: GPS L2C, Galileo/BDS3 E6/E5b and Glonass L3.


Industry news.
Trying to be as unbiased as possible, the GNSS industry also had some interesting developments this year. Pure GNSS seems to be slowly disappearing, as the low end of the GNSS offering is nowadays largely owned by the smartphone industry. Multi-aggregated-band LTE and WiFi are more complex than GNSS from a DSP perspective, so understandably the latter is considered a commodity. As general-purpose modems have multiple wireless capabilities, they can easily do single-frequency all-constellation tracking. Navigation is provided through a fusion of technologies and sensors, where GNSS plays the role of the poorest-availability, highest-accuracy source. While Mediatek has been quiet for a while now, maybe reaping the results of its flagship MTK3333, others have moved forward. HED Navigation continues to sell its high-sensitivity all-constellation single-frequency engine through locosys.com. u-blox released its 8th-generation silicon for the IoT/wearables market this year, as well as a single-frequency RTK-capable chipset (the M8P line). Geostar-navigation also came out with a single-frequency RTK-capable engine associated with its latest GeoS-5MR baseband. NVS released its sub-1K$ multi-constellation dual-frequency receiver and we, at Swift Navigation, continue to release FW updates enabling new features of the Piksi Multi. Last but not least, Broadcom stepped into the high-precision field, releasing a multi-frequency multi-constellation chip, the BCM4775x. I am glad to see I had anticipated this some time ago, and my vision has materialized.


A little give-back
As prof. Borre passed away this year, and I had offered him my SdrNav40 sources as academic material for his new book, I think it's appropriate to open-source that design as promised.
Here is an archive with the 4-channels front-end HW design:
https://drive.google.com/open?id=1QCFZQDv7fTunZxiyiULaOhVfQj2DNdbp
Here the SW for Linux and Windows:
https://drive.google.com/open?id=1PjwSCmO3ZRsHWPlxo8tptu41s7bza_1Q
https://drive.google.com/open?id=1Vqp5zocYVbbohhXGh-bgXhh1cB4N4ncn
It's an old design, not nearly as polished as Peter Monta's firehose... but it's done an admirable job for me in lots of personal research projects. Feel free to ask questions if needed.


つづく (to be continued)

by noreply@blogger.com (Michele Bavaro) at December 31, 2017 04:13 PM

December 26, 2017

Bunnie Studios

Name that Ware December 2017

The Ware for December 2017 is shown below.

I’ve partially cropped the photo to make it a bit more challenging, but I have a feeling this will be guessed … rather quickly … despite the impediment.

It’s a real beauty on the inside, isn’t it?

by bunnie at December 26, 2017 03:13 PM

Winner, Name that Ware November 2017

The Ware for November 2017 is the “Front Panel Display Board” from an Intel Paragon supercomputer. Many moons ago one of these was being decommissioned at MIT and I raided the cabinet for interesting-looking parts. I snagged a ginormous 5V power supply (the monster could pump out 400A @ 5V) and one of these front panel LED boards. As alluded to by some of the comments, this is the sort of LED board that gets designed when money is not an issue.

Nobody got quite close enough on this one to call a winner, so again this month we have a winner-less name that ware. I have a feeling, however, that next month’s should be a cinch :)

by bunnie at December 26, 2017 03:13 PM

December 12, 2017

Altus Metrum

Altos1.8.3

AltOS 1.8.3 — TeleMega version 3.0 support and bug fixes

Bdale and I are pleased to announce the release of AltOS version 1.8.3.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STM32F042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including support for our new TeleMega v3.0 board and a selection of bug fixes.

Announcing TeleMega v3.0

TeleMega is our top of the line flight computer with 9-axis IMU, 6 pyro channels, uBlox Max 7Q GPS and 40mW telemetry system. Version 3.0 is feature compatible with version 2.0, incorporating a new higher-performance 9-axis IMU in place of the former 6-axis IMU and separate 3-axis magnetometer.

AltOS 1.8.3

In addition to support for TeleMega v3.0 boards, AltOS 1.8.3 contains some important bug fixes for all flight computers. Users are advised to upgrade their devices.

  • Ground testing EasyMega and TeleMega additional pyro channels could result in a sticky 'fired' status which would prevent these channels from firing on future flights.

  • Corrupted flight log records could prevent future flights from capturing log data.

  • Fixed saving of pyro configuration that ended with 'Descending'. This would cause the configuration to revert to the previous state during setup.

The latest AltosUI and TeleGPS applications have improved functionality for analyzing flight data. The built-in graphing capabilities are improved with:

  • Graph lines have improved appearance to make them easier to distinguish. Markers may be placed at data points to show captured recorded data values.

  • Graphing offers the ability to adjust the smoothing of computed speed and acceleration data.

Exporting data for other applications has some improvements as well:

  • KML export now reports both barometric and GPS altitude data to make it more useful for Tripoli record reporting.

  • CSV export now includes TeleMega/EasyMega pyro voltages and tilt angle.

by keithp's rocket blog at December 12, 2017 05:44 PM

December 04, 2017

Free Electrons

Feedback from the Netdev 2.2 conference

The Netdev 2.2 conference took place in Seoul, South Korea. As we work on a diversity of networking topics at Free Electrons as part of our Linux kernel contributions, Free Electrons engineers Alexandre Belloni and Antoine Ténart went to Seoul to attend lots of interesting sessions and to meet with the Linux networking community. Below, they report on what they learned from this conference, by highlighting two talks they particularly liked.

Linux Networking Dietary Restrictions — slides

David S. Miller gave a keynote about reducing the size of core structures in the Linux kernel networking core. The idea behind his work is to use smaller structures, which has many benefits in terms of performance, as fewer cache misses will occur and less memory is needed. This is especially true in the networking core, where small changes may have enormous impacts and improve performance a lot. Another argument, from his maintainer perspective, is maintainability: smaller structures usually mean less complexity.

He presented five techniques he used to shrink the networking core data structures. The first one was to identify members of common base structures that are only used in sub-classes, as these members can easily be moved out and not impact all the data paths.

The second one makes use of what David calls “state compression”: understanding the real width of the information stored in data structures and packing flags together to save space. In his mind a boolean should take a single bit, whereas in the kernel it usually occupies far more space than that. While this is fine for many uses, it makes sense to compress this data in critical structures.
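As a rough illustration (a hypothetical sketch, not code shown in the talk), state compression essentially means turning full-width members into packed bit-fields:

/* Hypothetical example of "state compression", not actual kernel code:
 * each flag below used to burn a full int, while one bit per flag is enough. */
struct flags_before {
        int is_connected;
        int no_checksum;
        int reuse_addr;
};                              /* typically 12 bytes */

struct flags_after {
        unsigned int is_connected:1;
        unsigned int no_checksum:1;
        unsigned int reuse_addr:1;
};                              /* typically 4 bytes */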

Then David S. Miller spoke about unused bits in pointers: in the kernel, pointers to suitably aligned objects have 3 low bits that are never used. He argued these bits can hold 3 boolean values and be used to reduce core data structure sizes. Both this technique and state compression can be applied safely by introducing helpers to access the data.
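A minimal sketch of that idea (hypothetical helper names, not the actual kernel API): on objects with 8-byte alignment the 3 low address bits are always zero, so a flag can hide inside an existing pointer member, behind small accessors:

#include <stdint.h>

#define PTR_FLAG_MASK 0x7UL  /* low 3 bits are free on 8-byte aligned objects */

/* store up to 3 flag bits in the unused low bits of a pointer */
static inline void *ptr_pack(void *obj, unsigned long flags)
{
        return (void *)((uintptr_t)obj | (flags & PTR_FLAG_MASK));
}

/* recover the real pointer */
static inline void *ptr_unpack(void *packed)
{
        return (void *)((uintptr_t)packed & ~PTR_FLAG_MASK);
}

/* recover the flag bits */
static inline unsigned long ptr_flags(const void *packed)
{
        return (uintptr_t)packed & PTR_FLAG_MASK;
}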

Another technique he used was to unionize members that aren't used at the same time. This shrinks the structures even further, by not keeping areas of memory that are never used during identified steps in the networking stack.
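Again as a hedged sketch rather than real kernel code, the union trick looks like this: two members that are never live at the same time can share the same bytes.

/* Hypothetical example: a request is either waiting in a queue or it has
 * completed, never both, so the two members can overlap in memory. */
struct request {
        union {
                struct request *next_pending;   /* valid only while queued   */
                long completion_status;         /* valid only once completed */
        };
};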

Finally he showed us the last technique he used: using lookup keys instead of pointers when the objects can be found cheaply based on their index. While this cannot be used for every object, it helped shrink some data structures.

While going through all these techniques he gave many examples to help understand what can be saved and how effective it was. Overall this was a great talk, showing a critical aspect we do not always think about when writing drivers, and one that can lead to big performance improvements.

David S. Miller at Netdev 2.2

WireGuard: Next-generation Secure Kernel Network Tunnel — slides

Jason A. Donenfeld presented his new and shiny L3 network tunneling mechanism in Linux. After two years of development, this in-kernel, formally proven cryptographic protocol is ready to be submitted upstream to get the first rounds of review.

The idea behind WireGuard is to provide, with a small code base, a simple interface to establish and maintain encrypted tunnels. Jason made a demo whose simplicity was impressive when securely connecting two machines, something that can be a real pain with OpenVPN or IPsec. Under the hood this mechanism uses UDP packets on top of either IPv4 or IPv6 to transport encrypted packets using modern cryptographic principles. The authentication is similar to what SSH uses: static private/public key pairs. One particularly nice design choice is that WireGuard is exposed as a stateless interface to the administrator, whereas the protocol is stateful and timer based, which allows devices to be put into sleep mode without having to care about the tunnel.

One of the difficulties in getting WireGuard accepted upstream is its cryptographic needs, which do not match what the kernel cryptographic framework provides. Jason knows this and plans to first send patches reworking the cryptographic framework so that his module integrates nicely with the in-kernel APIs. The first RFC patches for WireGuard should be sent at the end of 2017, or at the beginning of 2018.

We look forward to seeing Wireguard hit the mainline kernel, to allow everybody to establish secure tunnels in an easy way!

Jason A. Donenfeld at Netdev 2.2

Conclusion

Netdev 2.2 was again an excellent experience for us. It was an (almost) single-track format, running alongside the workshops, which allowed us not to miss any session. The technical content let us dive deeply into the inner workings of the network stack and stay up-to-date with the current developments.

Thanks for organizing this and for the impressive job, we had an amazing time!

by Antoine Ténart at December 04, 2017 10:25 AM

November 26, 2017

Bunnie Studios

Name that Ware November 2017

The Ware for November 2017 is shown below.

Happy holidays to everyone!

by bunnie at November 26, 2017 09:25 PM

Solution, Name that Ware October 2017

It’s unusual to find a ware without a clear winner, and reading through the comment thread I found a lot of near-misses but none of them close enough for me to declare a winner.

The Ware for October 2017 is a Minibar Systems automated hotel minibar (looks like a “SmartCube 40i”, “The Minibar of the Future”). During an overnight layover, I decided to check the minibar for snack options, but upon pulling what I thought was the handle for the fridge, lo and behold a tray of electronics presented itself. My friend, extremely amused by my enthusiastic reaction, snapped this picture of me adding the ware to my catalog:

by bunnie at November 26, 2017 09:25 PM

November 21, 2017

Free Electrons

Back from ELCE: award to Free Electrons CEO Michael Opdenacker

The Embedded Linux Conference Europe 2017 took place at the end of October in Prague. We already posted about this event by sharing the slides and videos of Free Electrons talks and later by sharing our selection of talks given by other speakers.

During the closing session of this conference, Free Electrons CEO Michael Opdenacker received, from the hands of Tim Bird and on behalf of the ELCE committee, an award for his continuous participation in the Embedded Linux Conference Europe. Indeed, Michael has participated in all 11 editions of ELCE, without interruption. He has been very active in promoting the event, especially through the video recording effort that Free Electrons did in the early years of the conference, as well as through the numerous talks given by Free Electrons.

Michael Opdenacker receives an award at the Embedded Linux Conference Europe

Free Electrons is proud to see its continuous commitment to knowledge sharing and community participation be recognized by this award!

by Thomas Petazzoni at November 21, 2017 12:49 PM

November 20, 2017

Free Electrons

Linux 4.14 released, Free Electrons contributions

Drawing from Mylène Josserand,
based on a picture from Samuel Blanc under CC-BY-SA

Linux 4.14, which is going to become the next Long Term Supported version, has been released a week ago by Linus Torvalds. As usual, LWN.net did an interesting coverage of this release cycle merge window, highlighting the most important changes: The first half of the 4.14 merge window and The rest of the 4.14 merge window.

According to Linux Kernel Patch statistics, Free Electrons contributed 111 patches to this release, making it the 24th contributing company by number of commits: a somewhat lower than usual contribution level from our side. At least, Free Electrons cannot be blamed for trying to push more code into 4.14 because of its Long Term Support nature! 🙂

The main highlights of our contributions are:

  • On the RTC subsystem, Alexandre Belloni made as usual a number of fixes and improvements to various drivers, especially the ds1307 driver.
  • On the NAND subsystem, Boris Brezillon did a number of small improvements in various areas.
  • On the support for Marvell platforms
    • Antoine Ténart improved the ppv2 network driver used by the Marvell Armada 7K/8K SoCs: support for 10G speed and TSO support are the main highlights. In order to support 10G speed, Antoine added a driver in drivers/phy/ to configure the common PHYs in the Armada 7K/8K SoCs.
    • Thomas Petazzoni also improved the ppv2 network driver by adding support for TX interrupts and per-CPU RX interrupts.
    • Grégory Clement contributed some patches to enable NAND support on Armada 7K/8K, as well as a number of fixes in different areas (GPIO fix, clock handling fixes, etc.)
    • Miquèl Raynal contributed a fix for the Armada 3700 SPI controller driver.
  • On the support for Allwinner platforms
    • Maxime Ripard contributed the support for a new board, the BananaPI M2-Magic. Maxime also contributed a few fixes to the Allwinner DRM driver, and a few other misc fixes (clock, MMC, RTC, etc.).
    • Quentin Schulz contributed the support for the power button functionality of the AXP221 (PMIC used in several Allwinner platforms)
  • On the support for Atmel platforms, Quentin Schulz improved the clock drivers for this platform to properly support the Audio PLL, which allowed to fix the Atmel audio drivers. He also fixed suspend/resume support in the Atmel MMC driver to support the deep sleep mode of the SAMA5D2 processor.

In addition to making direct contributions, Free Electrons is also involved in the Linux kernel development by having a number of its engineers act as Linux kernel maintainers. As part of this effort, Free Electrons engineers have reviewed, merged and sent pull requests for a large number of contributions from other developers:

  • Boris Brezillon, as the NAND subsystem maintainer and MTD subsystem co-maintainer, merged 68 patches from other developers.
  • Alexandre Belloni, as the RTC subsystem maintainer and Atmel ARM platform co-maintainer, merged 32 patches from other developers.
  • Grégory Clement, as the Marvell ARM platform co-maintainer, merged 29 patches from other developers.
  • Maxime Ripard, as the Allwinner ARM platform co-maintainer, merged 18 patches from other developers.

This flow of patches from kernel maintainers to other kernel maintainers is also nicely described for the 4.14 release by the Patch flow into the mainline for 4.14 LWN.net article.

The detailed list of our contributions:

by Thomas Petazzoni at November 20, 2017 12:59 PM

November 16, 2017

Free Electrons

Mender: How to integrate an OTA updater

Recently, our customer Senic asked us to integrate an Over-The-Air (OTA) update mechanism in their embedded Linux system, and after some discussion, they ended up choosing Mender. This article will detail an example of Mender’s integration and how to use it.

What is Mender?

Mender is an open source remote updater for embedded devices. It is composed of a client installed on the embedded device, and a management server installed on a remote server. However, the server is not mandatory as Mender can be used standalone, with updates triggered directly on the embedded device.

Image taken from Mender’s website

In order to offer a fallback in case of failure, Mender uses a double-partition layout: the device has at least 2 rootfs partitions, one active and one inactive. Mender deploys an update to the inactive partition, so that in case of an error during the update process, the active partition remains intact. If the update succeeds, it switches to the updated partition: the active partition becomes inactive and the inactive one becomes the new active one. As the kernel and the device tree are stored in the /boot folder of the root filesystem, it is possible to easily update an entire system. (A rough sketch of this A/B selection logic is shown after the list below.) Note that Mender needs at least 4 partitions:

  • bootloader partition
  • data persistent partition
  • rootfs + kernel active partition
  • rootfs + kernel inactive partition

It is, of course, customizable if you need more partitions.

Two reference devices are supported: the BeagleBone Black and a virtual device. In our case, the board was a Nanopi-Neo, which is based on an Allwinner H3.

Mender provides a Yocto Project layer containing all the necessary classes and recipes to make it work. The most important thing to know is that it will produce an image ready to be written to an SD card to flash empty boards. It will also produce “artifacts” (files with a .mender extension) that will be used to update an existing system.
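
As an illustration, after a successful build you would typically find both types of output under the Yocto deploy directory. The image and machine names below are placeholders; the exact file names depend on your image recipe and MACHINE setting:

  $ ls tmp/deploy/images/<machine>/
  <image-name>-<machine>.sdimg    # complete image, to be flashed on an empty SD card
  <image-name>-<machine>.mender   # artifact, used to update an already running system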

Installation and setup

In this section, we will see how to setup the Mender client and server for your project. Most of the instructions are taken from the Mender documentation that we found well detailed and really pleasant to read. We’ll simply summarize the most important steps.

Server side

The Mender server will allow you to remotely update devices. The server can be installed in two modes:

  • demo mode: used to try out a demo server. It can be nice if you just want to quickly deploy a Mender solution, for testing purposes only. It includes a demo layer that sets up and configures a default Mender server on the localhost of your workstation.
  • production mode: used for production. We will focus on this mode as we wanted to use Mender in a production context. This mode allows you to customize the server configuration: IP address, certificates, etc. Because of that, some configuration is necessary (which is not the case in demo mode).

In order to install the Mender server, you should first install Docker CE and Docker Compose. Have a look at the corresponding Docker instructions.

Setup

  • Download the integration repository from Mender:
  • $ git clone https://github.com/mendersoftware/integration mender-server
    
  • Check out the 1.1.0 tag (the latest version at the time of our tests)
  • $ cd mender-server
    $ git checkout 1.1.0 -b my-production-setup
    
  • Copy the template folder and update all the references to “template”
  • $ cp -a template production
    $ cd production
    $ sed -i -e 's#/template/#/production/#g' prod.yml
    
  • Download Docker images
  • $ ./run pull
    
  • Use the keygen script to create certificates for domain names (e.g. mender.foobar.com and s3.foobar.com)
  • $ CERT_API_CN=mender.foobar.com CERT_STORAGE_CN=s3.foobar.com ../keygen
    
  • Some persistent storage will be needed by Mender so create a few Docker volumes:
  • $ docker volume create --name=mender-artifacts
    $ docker volume create --name=mender-deployments-db
    $ docker volume create --name=mender-useradm-db
    $ docker volume create --name=mender-inventory-db
    $ docker volume create --name=mender-deviceadm-db
    $ docker volume create --name=mender-deviceauth-db
    

Final configuration

This final configuration will link the generated keys with the Mender server. All the modifications will be in the prod.yml file.

  • Locate the storage-proxy service in prod.yml and set its alias to your domain name, in our case s3.foobar.com, under networks.mender.aliases
  • Locate the minio service. Set MINIO_ACCESS_KEY to “mender-deployments” and MINIO_SECRET_KEY to a generated password (created with e.g.: $ apg -n1 -a0 -m32)
  • Locate the mender-deployments service. Set DEPLOYMENTS_AWS_AUTH_KEY and DEPLOYMENTS_AWS_AUTH_SECRET to the values of MINIO_ACCESS_KEY and MINIO_SECRET_KEY, respectively. Set DEPLOYMENTS_AWS_URI to point to your domain, such as https://s3.foobar.com:9000 (see the illustrative excerpt below)
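
For reference, here is a rough sketch of how the relevant parts of prod.yml could look after these edits. The exact structure and service definitions depend on the version of the integration repository you checked out, and the password shown is only a placeholder:

        storage-proxy:
            networks:
                mender:
                    aliases:
                        - s3.foobar.com

        minio:
            environment:
                MINIO_ACCESS_KEY: mender-deployments
                MINIO_SECRET_KEY: <generated-password>

        mender-deployments:
            environment:
                DEPLOYMENTS_AWS_AUTH_KEY: mender-deployments
                DEPLOYMENTS_AWS_AUTH_SECRET: <generated-password>
                DEPLOYMENTS_AWS_URI: https://s3.foobar.com:9000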

Start the server

Make sure that the domain names you have defined (mender.foobar.com and s3.foobar.com) are accessible, potentially by adding them to /etc/hosts if you’re just testing.

  • Start the server
  • $ ./run up -d
    
  • If it is a new installation, request initial user login:
  • $ curl -X POST  -D - --cacert keys-generated/certs/api-gateway/cert.crt https://mender.foobar.com:443/api/management/v1/useradm/auth/login
    
  • Check that you can create a user and log in to the Mender UI:
  •  $ firefox https://mender.foobar.com:443 

Client side – Yocto Project

Mender has a Yocto Project layer to easily interface with your own layer.
We will see how to customize your layer and image components (U-Boot, Linux kernel) to correctly configure them for use with Mender.

In this section, we will assume that you have your own U-Boot and your own kernel repositories (and thus, recipes) and that you retrieved the correct branch of this layer.

Machine and distro configurations

  • Make sure that the kernel image and Device Tree files are installed in the root filesystem image
  • RDEPENDS_kernel-base += "kernel-image kernel-devicetree"
    
  • Update the distro to inherit the mender-full class and add systemd as the init manager (we only tested Mender’s integration with systemd)
  • # Enable systemd for Mender
    DISTRO_FEATURES_append = " systemd"
    VIRTUAL-RUNTIME_init_manager = "systemd"
    DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"
    VIRTUAL-RUNTIME_initscripts = ""
    
    INHERIT += "mender-full"
    
  • By default, Mender assumes that your storage device is /dev/mmcblk0, that mmcblk0p1 is your boot partition (containing the bootloader), that mmcblk0p2 and mmcblk0p3 are your two root filesystem partitions, and that mmcblk0p5 is your data partition. If that’s the case for you, then everything is fine! However, if you need a different layout, you need to update your machine configuration. Mender’s client retrieves which storage device to use from the MENDER_STORAGE_DEVICE variable (which defaults to mmcblk0). The partitions themselves are specified using MENDER_BOOT_PART, MENDER_ROOTFS_PART_A, MENDER_ROOTFS_PART_B and MENDER_DATA_PART. If you need to change the default storage device or partition layout, edit these variables in your machine configuration according to your needs. Here is an example for /dev/sda:
  • MENDER_STORAGE_DEVICE = "/dev/sda"
    MENDER_STORAGE_DEVICE_BASE = "${MENDER_STORAGE_DEVICE}"
    MENDER_BOOT_PART = "${MENDER_STORAGE_DEVICE_BASE}1"
    MENDER_ROOTFS_PART_A = "${MENDER_STORAGE_DEVICE_BASE}2"
    MENDER_ROOTFS_PART_B = "${MENDER_STORAGE_DEVICE_BASE}3"
    MENDER_DATA_PART = "${MENDER_STORAGE_DEVICE_BASE}5"
    
  • Do not forget to update the artifact name in your local.conf, for example:
  • MENDER_ARTIFACT_NAME = "release-1"
    

    As described in Mender’s documentation, Mender stores the artifact name in its artifact image. It must be unique, which is what we expect, since an artifact represents a release tag or a delivery. Note that if you forget to update it and upload an artifact with the same name as an existing one, the web UI will not take it into account.
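
    If you want to double-check the name embedded in a generated artifact, the mender-artifact tool can display it. This assumes the tool is installed on your workstation; the file name below is just an example:

    $ mender-artifact read foobar.mender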

    U-Boot configuration tuning

    Some modifications to U-Boot are necessary to be able to perform the rollback (i.e. use a different partition after an unsuccessful update).

    • Mender needs BOOTCOUNT support in U-Boot. It creates a bootcount variable that is incremented each time the board reboots (and reset to 1 after a power-on reset). Mender uses this variable in its rollback mechanism.
      Make sure to enable it in your U-Boot configuration. This will most likely require a patch to your board .h configuration file, enabling:
    • #define CONFIG_BOOTCOUNT_LIMIT
      #define CONFIG_BOOTCOUNT_ENV
      
    • Remove the environment variables that will be redefined by Mender. They are listed in Mender’s documentation.
    • Update your U-Boot recipe to inherit Mender’s one and make sure to provide the u-boot virtual package (using PROVIDES)
    • # Mender integration
      require recipes-bsp/u-boot/u-boot-mender.inc
      PROVIDES += "u-boot"
      RPROVIDES_${PN} += "u-boot"
      BOOTENV_SIZE = "0x20000"
      

      BOOTENV_SIZE must be set to the same value as the U-Boot CONFIG_ENV_SIZE option. It is used by the u-boot-fw-utils tool to retrieve the U-Boot environment variables.

      Mender uses u-boot-fw-utils, so make sure that you have a recipe for it and that Mender’s include file is included. To do that, you can create a bbappend file for the default recipe, or create your own recipe if you need a specific version. Have a look at the example in Mender’s documentation.
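
      As a sketch, such a bbappend could look like the following. The exact name and location of Mender’s include file may differ between meta-mender versions, so double-check it against the documentation:

      # u-boot-fw-utils_%.bbappend
      require recipes-bsp/u-boot/u-boot-fw-utils-mender.inc
      PROVIDES += "u-boot-fw-utils"
      RPROVIDES_${PN} += "u-boot-fw-utils"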

    • Tune your U-Boot environment to use Mender’s variables. Here are some examples of the modifications to be done. Set the root= kernel argument to use ${mender_kernel_root}, set the bootcmd to load the kernel image and Device Tree from ${mender_uboot_root} and to run mender_setup. Make sure that you are loading the Linux kernel image and Device Tree file from the root filesystem /boot directory.
      setenv bootargs 'console=${console} root=${mender_kernel_root} rootwait'
      setenv mmcboot 'load ${mender_uboot_root} ${fdt_addr_r} boot/my-device-tree.dtb; load ${mender_uboot_root} ${kernel_addr_r} boot/zImage; bootz ${kernel_addr_r} - ${fdt_addr_r}'
      setenv bootcmd 'run mender_setup; run mmcboot'
      

    Mender’s client recipe

    As stated in the introduction, Mender has a client, in the form of a userspace application, that will be used on the target. Mender’s layer has a Yocto recipe for it, but that recipe does not have our server certificates. To establish a connection between the client and the server, the certificates have to be installed in the image. For that, a bbappend recipe will be created. It will also allow us to perform additional Mender configuration, such as defining the server URL.

    • Create a bbappend for the Mender recipe
    • FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
      SRC_URI_append = " file://server.crt"
      MENDER_SERVER_URL = "https://mender.senic.com"
      
    • Copy your server certificates into the bbappend recipe folder (see the illustrative layout below)
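
      For reference, assuming the client recipe is simply named mender, the resulting layout in your layer could look like this (paths are illustrative only):

      recipes-mender/mender/mender_%.bbappend
      recipes-mender/mender/mender/server.crt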

    Recompile the image, and we should now have everything we need to update the system. Do not hesitate to run through the integration checklist: it is a convenient way to check whether everything is correctly configured (or not).

    If you want more robustness and security, you can sign your artifacts to make sure they come from a trusted source. If you want this feature, have a look at this documentation.

    Usage

    Standalone mode

    To deploy an update using the standalone mode (i.e. without a server), here are the commands to use. You will need to adjust them according to your needs.

    • On your work station, create a simple HTTP server in your Yocto deploy folder:
    • $ python -m SimpleHTTPServer
    • On the target, start mender in standalone mode
    • $ mender -log-level info -rootfs http://192.168.42.251:8000/foobar.mender

      You can also use the mender command to start an update from a local .mender file, provided by a USB key or SD card.

    • Once finished, you will have to reboot the target manually
    • $ reboot

      After the first reboot, you will be on the new active partition (if the previous one was /dev/mmcblk0p2, you should be on /dev/mmcblk0p3). Check the kernel version, artifact name or command line:

      $ uname -a
      $ cat /etc/mender/artifact_info
      $ cat /proc/cmdline
      

      If you are okay with this update, you have to commit it; otherwise the update will not be persistent, and once you reboot the board, Mender will roll back to the previous partition:

      $ mender -commit

    Using Mender’s server UI

    The Mender server UI provides a management interface to deploy updates on all your devices. It knows about all your devices, their current software version, and you can plan deployments on all or a subset of your devices. Here are the basic steps to trigger a deployment:

    • Login (or create an account) into the mender server UI: https://mender.foobar.com:443
    • Power-up your device
    • The first time, you will have to authorize the device. You will find it in your “dashboard” or in the “devices” section.
    • Once authorized, the server will retrieve device information such as the current software version, MAC address, network interface, and so on
    • To update a partition, you will have to create a deployment using an artifact.
    • Upload the new artifact in the server UI using the “Artifacts” section
    • Deploy the new artifact using the “deployment” or the “devices” section. You can follow the progress in the “status” field, which will go through “installing”, “rebooting”, etc. The board will reboot and the partition should be updated.

    Troubleshooting

    Here are some issues we faced when we integrated Mender for our device. The Mender documentation also has a troubleshooting section, so have a look at it if you are facing issues. Otherwise, the community seems to be active, even though we did not need to interact with it, since everything worked like a charm when we tried it.

    Update systemd’s service starting

    By default, the Mender systemd service starts after the systemd-resolved service. On our target device, the network was only available via WiFi. We had to wait for the wlan0 interface to be up and automatically connected to a network before starting Mender’s service; otherwise it leads to an error because the network is unreachable. To solve this issue, which is specific to our platform, we set the systemd dependencies to “network-online.target” to be sure that a network is available:

    -After=systemd-resolved.service
    +After=network-online.target
    +Wants=network-online.target
    

    It now matches our use case, because the Mender service will start only once the wlan0 connection is available and working.
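
    Note that the same override can also be applied at runtime with a standard systemd drop-in instead of patching the unit file. This is only a sketch, assuming the unit is named mender.service on your target:

    $ mkdir -p /etc/systemd/system/mender.service.d
    $ cat > /etc/systemd/system/mender.service.d/override.conf << EOF
    [Unit]
    After=network-online.target
    Wants=network-online.target
    EOF
    $ systemctl daemon-reload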

    Certificate expired

    The certificates generated and used by Mender have a validity period. In case your board does not have its RTC correctly set, Mender can fail with the error:

    systemctl status mender
    [...]
    ... level=error msg="authorize failed: transient error: authorization request failed: failed to execute authorization request:
    Post https:///api/devices/v1/authentication/auth_requests: x509: certificate has expired or is not yet valid" module=state
    

    To solve this issue, update the date on your board and make sure your RTC is correctly set.
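
    For example (the exact date format accepted depends on the date implementation on your board):

    $ date -s "2017-11-16 10:00:00"
    $ hwclock -w    # store the current time in the RTC, if one is present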

    Device deletion

    While testing Mender’s server (version 1.0), we always used the same board and ran into an issue where the board was already registered in the server UI but had a different Device ID (which is used by Mender to identify devices). Because of that, the server kept rejecting the authentication. The next release of the Mender server offers the possibility to remove a device, so we updated the Mender server to the latest version.

    Deployments not taken into account

    Note that by default the Mender client checks every 30 minutes whether a deployment is available for the device. During testing, you may want to reduce this period, which you can do in Mender’s configuration file using the UpdatePollIntervalSeconds variable, as sketched below.
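
    As an illustration, assuming the client reads its configuration from /etc/mender/mender.conf, lowering the poll interval could look like this (values are examples only; keep the other fields of your existing configuration):

    {
      "ServerURL": "https://mender.foobar.com",
      "UpdatePollIntervalSeconds": 60
    }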

    Conclusion

    Mender is an OTA updater for embedded devices. It has great documentation in the form of tutorials, which makes the integration easy. While testing it, the only issues we encountered were related to our custom platform or were already covered in the documentation. Deploying it on a board was not difficult; only some U-Boot/kernel and Yocto Project modifications were necessary. All in all, Mender worked perfectly fine for our project!

    by Mylène Josserand at November 16, 2017 09:06 AM

    November 15, 2017

    Free Electrons

    Back from ELCE: selection of talks from the Free Electrons team

    As discussed in our previous blog post, Free Electrons had a strong presence at the Embedded Linux Conference Europe, with 7 attendees, 4 talks, one BoF and one poster during the technical showcase.

    In this blog post, we would like to highlight a number of talks from the conference that we found interesting. Each Free Electrons engineer who attended the conference has selected one talk, and gives his/her feedback about this talk.

    uClibc Today: Still Makes Sense – Alexey Brodkin

    Talk selected by Michael Opdenacker

    Alexey Brodkin, an active contributor to the uClibc library, shared recent updates about this C library, trying to show people that the project was still active and making progress, after a few years during which it appeared to be stalled. Alexey works for Synopsys, the makers of the ARC architecture, which uClibc supports.

    If you look at the repository for uClibc releases, you will see that since version 1.0.0 released in 2015, the project has made 26 releases according to a predictable schedule. The project also runs runtime regression tests on all its releases, which weren’t done before. The developers have also added support for 4 new architectures (arm64 in particular), and uClibc remains the default C library that Buildroot proposes.

    Alexey highlighted that in spite of the competition from the musl library, which has caused several projects to switch from uClibc to musl, uClibc still makes a lot of sense today. As a matter of fact, it supports more hardware architectures than glibc and musl do, as it’s the only one to support platforms without an MMU (such as noMMU ARM, Blackfin, m68k, Xtensa), and the library size is still smaller than what you get with musl (though a static hello_world program is much smaller with musl, if you have a close look at the library comparison tests he mentioned).

    Alexey noted that the uClibc++ project is still alive too, and used in OpenWRT/Lede by default.

    Read the slides and watch the video of his talk.

    Identifying and Supporting “X-Compatible” hardware blocks – Chen-Yu Tsai

    Talk selected by Quentin Schulz

    An SoC is made of multiple IP blocks from different vendors. In some cases, the source or model of a hardware block is neither documented nor marketed by the SoC vendor. However, since there are only very few vendors of a given IP block, chances are high that your SoC vendor’s undocumented IP block is compatible with a known one.

    With his experience in developing drivers for multiple IP blocks present in Allwinner SoCs, and as a maintainer of those same SoCs, Chen-Yu first explained that SoC vendors often either embed some vendors’ licensed IP blocks in their SoCs and add the glue around them for platform- or SoC-specific hardware (clocks, resets and control signals), or they clone IP blocks with the same logic but some twists (missing, obfuscated or rearranged registers).

    To identify the IP block, we can dig into the datasheet or the vendor BSP and compare those with well documented datasheets such as the one for NXP i.MX6, TI KeyStone II or the Zynq UltraScale+ MPSoC, or with mainline drivers. Asking the community is also a good idea as someone might have encountered an IP with the same behaviour before and can help us identify it quicker.

    Good identifiers for IPs can be register layouts or names, along with the DMA logic and descriptor format. For the unlucky ones who have only been provided a blob, looking at the symbols in it may help a bit in the process.

    He also mentioned that identifying an IP block is often the result of the developer’s experience in identifying IPs, and other times just pure luck. Unfortunately, there are times when someone couldn’t identify an IP and wrote a complete driver, only to be told by someone else in the community that it looks like an already known IP, in which case the work done may just be thrown away. That’s where the community plays a strong role, helping us in our quest to identify an IP.

    Chen-Yu then went on to present the different ways to handle the multiple variants of an IP block in drivers. He said that the core logic of all IP drivers is usually presented as a library, and that the different variants each have a driver with their own resources and extra setup which uses this library. Also, a good practice is to use booleans to select features of IP blocks instead of using the ID of each variant.
    For IPs whose registers have been altered, the way to go is usually to write a table for the register offsets, or to use regmaps when bitfields are also modified. When the IP block differs a bit too much, custom callbacks should be used.

    He ended his talk with feedback from his experience with multiple IP blocks (UART, USB OTG, GMAC, EMAC and HDMI) present in Allwinner SoCs, and the differences developers had to handle when adding support for them.

    Read the slides and watch the video of his talk.

    printk(): The Most Useful Tool is Now Showing its Age – Steven Rostedt & Sergey Senozhatsky

    Talks selected by Boris Brezillon. Boris also covered the related talk “printk: It’s Old, What Can We Do to Make It Young Again?” from the same speakers.

    Maybe I should be ashamed of saying it, but printk() is one of the basic tools I use to debug kernel code, and I must say it has done the job so far. So when I saw these presentations about improving printk(), I was a bit curious: what could be wrong with printk()’s implementation?
    Before attending the talks, I had never dug into printk()’s code, because it just worked for me, but what I thought was a simple piece of code turned out to be a complex infrastructure with a locking scheme that makes you realize how hard things get when several CPUs are involved.

    At its core, printk() is supposed to store logs into a circular buffer and push new entries to one or several consoles. In his first talk, Steven Rostedt walked through the history of printk() and explained why it became more complex when support for multiple CPUs appeared. He also detailed why printk() is not re-entrant and the problems this causes when it is called from an NMI handler. He finally went through some fixes that made the situation a bit better, and advertised the second half of the talk, driven by Sergey Senozhatsky.

    Note that between these two presentations, the printk() rework was discussed at the Kernel Summit, so Sergey already had some feedback on his proposals. While Steven’s presentation focused mainly on the main printk() function, Sergey gave a bit more detail on why printk() can deadlock. One of the reasons preventing deadlocks is so complicated is that printk() delegates the ‘print to console’ aspect to console drivers, which have their own locking schemes. To address that, it is proposed to move away from the callback approach and let console drivers poll for new entries in the console buffer instead, which would remove part of the locking issues. The problem with this approach is that it brings even more uncertainty on when the logs are printed on the consoles, whereas one of the nice things about printk() in its current form is that the log is likely to be printed on the output before printk() returns (which helps a lot when you debug things).

    He also mentioned other solutions to address other possible deadlocks, but I must admit I got lost at some point, so if you’re interested in this topic I recommend that you watch the video (printk(): The Most Useful Tool is Now Showing its Age, sadly no video is available for the second talk) and read the slides (printk(): The Most Useful Tool is Now Showing its Age and printk: It’s Old, What Can We Do to Make It Young Again?).

    More robust I2C designs with a new fault-injection driver – Wolfram Sang

    Talk selected by Miquèl Raynal

    Although Wolfram had a lot of trouble starting his presentation for lack of a proper HDMI adapter, he gave an illuminating talk about how, as an I2C subsystem maintainer, he would like to strengthen the robustness of I2C drivers.

    He first explained some basics of the I2C bus, like START and STOP conditions, and introduced us to a few errors he regularly spots in drivers. For instance, some badly written drivers use a START and STOP sequence where a “repeated START” is needed. This is very bad because another master on the bus could, in this little idle window, decide to grab the medium and send its own message. Then the message right after the repeated START would not have the expected effect. Of course plenty of other errors can happen: stalled bus (SDA or SCL stuck low), lost arbitration, faulty bits… All these situations are usually due to incorrect sequences sent by the driver.

    To avoid painful debugging of the obscure situations where this happens, he decided to use an extended i2c-gpio interface to access SDA and SCL from two external GPIOs, and in this way force faulty situations by simply pinning one line (or both) high or low and seeing how the driver reacts. The I2C specification and framework provide everything needed to get out of a faulty situation; it is just a matter of using it (sending a STOP condition, clocking 9 times, performing a reset, etc.).

    Wolfram is aware that his approach is quite conservative, but he is really afraid of breaking users with random quirks, so he used this talk to explain his point of view and the solutions he wants to promote.

    Two questions, which you might have a hard time hearing in the recording, were also interesting. The first person asked whether he had ever considered using a “default faulty chip”, designed to perform this kind of fault injection by itself, to see how the host reacts and behaves. Wolfram said buying hardware is too much effort for debugging, so he was more motivated to get something very easy and straightforward to use. Someone else asked whether he had thought about multi-client situations, but from Wolfram’s point of view all clients are in the same state whether the bus is busy or free, and should not misbehave if we clock 9 times.

    Watch the video and grab the slides.

    HDMI 4k Video: Lessons Learned – Hans Verkuil

    Talk selected by Maxime Ripard

    Having recently worked on a number of display related drivers, it was quite natural to go see what I was probably going to work on in the near future.

    Hans started by talking about HDMI in general, and the various signals that can go through it. He then went on with what was basically a war story about all the mistakes, gotchas and misconceptions that he encountered while working on a video-conference box for Cisco. He covered the hardware itself, but also more low-level aspects, such as the clock frequencies needed to operate properly, the various signals you can look at for debugging, or the issues that might come with the associated encodings and/or color spaces, especially when you want to support as many displays as you can. He also pointed out the flaws in the specifications that might lead to implementation inconsistencies. He concluded with the flaws of various HDMI adapters, the issues that might arise when using them on various OSes, and how to work around them when possible.

    Watch the video and the slides.

    The Serial Device Bus – Johan Hovold

    Talk selected by Thomas Petazzoni

    Johan started his talk at ELCE by exposing the problem with how serial ports (UARTs) are currently handled in the Linux kernel. Serial ports are handled by the TTY layer, which allows user-space applications to send and receive data to and from whatever is connected on the other side of the UART. However, the kernel doesn’t provide a good mechanism to model the device that is connected at the other side of the UART, such as a Bluetooth chip. Because of this, people have resorted either to writing user-space drivers for such devices (which falls short when those devices need additional resources such as regulators, GPIOs, etc.) or to developing specific TTY line disciplines in the kernel. The latter also doesn’t work very well, because a line discipline needs to be explicitly attached to a UART to operate, which requires a user-space program such as hciattach used in Bluetooth applications.

    In order to address this problem, Johan picked up the work initially started by Rob Herring (Linaro) on a serial device bus (serdev for short), which consists in turning the UART into a proper bus, with bus controllers (UART controllers) and devices connected to this bus, very much like other busses in the Linux kernel (I2C, SPI, etc.). serdev was initially merged in Linux 4.11, with improvements merged in follow-up versions. Johan then described in detail how serdev works. First, there is a TTY port controller which, instead of registering the traditional TTY character device, registers the serdev controller and its slave devices. Thanks to this, one can describe, in the Device Tree, child nodes of the UART controller node, which will be probed as serdev slaves. There is then a serdev API that allows the implementation of serdev slave drivers, which can send and receive data over the UART. A few drivers already use this API: hci_serdev, hci_bcm, hci_ll, hci_nokia (Bluetooth) and qca_uart (Ethernet).

    We found this talk very interesting, as it clearly explained what the use case for serdev is and how it works, and it should become a very useful subsystem for many embedded applications that use UART-connected devices.

    Watch the video and the slides.

    GStreamer for Tiny Devices – Olivier Crête

    Talk selected by Grégory Clement

    The purpose of this talk was to show how to shrink GStreamer to make it fit in an embedded Linux device. First, Olivier Crête introduced what GStreamer is; it was very high level but well done. Then, after presenting the issue, he showed step by step how he managed to reduce the footprint of a GStreamer application to fit in his device.

    The first part focused on features specific to GStreamer, such as how to build only the needed plugins. Then, most of the tricks shown could be used for any C or C++ application. The talk was pretty short, so there was no useless or boring part. Moreover, the speaker himself was good and dynamic.

    To conclude, it was a very pleasant talk, teaching step by step how to reduce the footprint of an application, whether it uses GStreamer or not.

    Watch the video and the slides.

    by Thomas Petazzoni at November 15, 2017 08:33 AM

    November 14, 2017

    Video Circuits

    Video Circuits Workshop 01/07/17

    Alex and I are running another workshop, this time at the Brighton Modular Meet. Book here:
    https://www.attenboroughcentre.com/events/902/video-synthesis-workshop/
    We will also be running a video synthesis room all Sunday.
    http://brightonmodularmeet.co.uk/brighton-modular-meet---about.html

    Come hang out and enjoy the rest of the meet.

    Some pics of the panels we had made by Matt for Jona's CH/AV project (which we will be making on the day). There is also a shot of some audio oscillators driving the CH/AV to a VGA monitor.








    by Chris (noreply@blogger.com) at November 14, 2017 05:47 AM

    November 08, 2017

    Bunnie Studios

    A Clash of Cultures

    There’s an Internet controversy going on between Dale Dougherty, the CEO of Maker Media, and Naomi Wu (@realsexycyborg), a Chinese Maker and Internet personality. Briefly, Dale Dougherty tweeted a single line questioning Naomi Wu’s authenticity, which is destroying Naomi’s reputation and livelihood in China.

    In short, I am in support of Naomi Wu. Rather than let the Internet speculate on why, I am sharing my perspectives on the situation preemptively.

    As with most Internet controversies, it’s messy and emotional. I will try my best to outline the biases and issues I have observed. Of course, everyone has their perspective; you don’t have to agree with mine. And I suspect many of my core audience will dislike and disagree with this post. However, the beginning of healing starts with sharing and listening. I will share, and I respectfully request that readers read the entire content of this post before attacking any individual point out of context.

    The key forces I see at play are:

    1. Prototype Bias – how assumptions based on stereotypes influence the way we think and feel
    2. Idol Effect – the tendency to assign exaggerated capabilities and inflated expectations upon celebrities
    3. Power Asymmetry – those with more power have more influence, and should be held to a higher standard of accountability
    4. Guanxi Bias – the tendency to give foreign faces more credibility than local faces in China

    All these forces came together in a perfect storm this past week.

    1. Prototype Bias

    If someone asked you to draw a picture of an engineer, who would you draw? As you draw the figure, the gender assigned is a reflection of your mental prototype of an engineer – your own prototype bias. Most will draw a male figure. Society is biased to assign high-level intellectual ability to males, and this bias starts at a young age. Situations that don’t fit into your prototypes can feel threatening; studies have shown that men defend their standing by undermining the success of women in STEM initiatives.

    The bias is real and pervasive. For example, my co-founder in Chibitronics, Jie Qi, is female. The company is founded on technology that is a direct result of her MIT Media Lab PhD dissertation. She is the inventor of paper electronics. I am a supporting actor in her show. Despite laying this fact out repeatedly, she still receives comments and innuendo implying that I am the inventor or more influential than I really am in the development process.

    Any engineer who observes a bias in a system and chooses not to pro-actively correct for it is either a bad engineer or they stand to benefit from the bias. So much of engineering is about compensating, trimming, and equalizing imperfections out of real systems: wrap a feedback loop around it, and force the error function to zero.

    So when Jie and I stand on stage together, prototype bias causes people to assume I’m the one who invented the technology. Given that I’m aware of the bias, does it make sense to give us equal time on the stage? No – that would be like knowing there is uneven loss in a channel and then being surprised when certain frequency bands are suppressed by the time it hits the receivers. So, I make a conscious and deliberate effort to showcase her contributions and to ensure her voice is the first and last voice you hear.

    Naomi Wu (pictured below) likely challenges your prototypical ideal of an engineer. I imagine many people feel a cognitive dissonance juxtaposing the label “engineer” or “Maker” with her appearance. The strength of that dissonant feeling is proportional to the amount of prototype bias you have.

    I’ve been fortunate to experience breaking my own prototypical notions that associate certain dress norms with intelligence. I’m a regular at Burning Man, and my theme camp is dominated by scientists and engineers. I’ve discussed injection molding with men in pink tutus and learned about plasmonics from half-naked women. It’s not a big leap for me to accept Naomi as a Maker. I’m glad she’s challenging these biases. I do my best engineering when sitting half-naked at my desk. I find shirts and pants to be uncomfortable. I don’t have the strength to challenge these social norms, and secretly, I’m glad someone is.

    Unfortunately, prototype bias is only the first challenge confronted in this situation.

    2. Idol Effect

    The Idol Effect is the tendency to assign exaggerated capabilities to public figures and celebrities. The adage “never meet your childhood hero” is a corollary of the Idol Effect – people have inflated expectations about what celebrities can do, so it’s often disappointing when you find out they are humans just like us.

    One result of the Idol Effect is that people feel justified taking pot shots at public figures for their shortcomings. For example, I have had the great privilege of working with Edward Snowden. One of my favorite things about working with him is that he is humble and quick to correct misconceptions about his personal abilities. Because of his self-awareness of his limitations, it’s easier for me to trust his assertions, and he’s also a fast learner because he’s not afraid to ask questions. Notably, he’s never claimed to be a genius, so I’m always taken aback when intelligent people pull me aside and whisper in my ear, “You know, I hear Ed’s a n00b. He’s just using you.” Somehow, because of Ed’s worldwide level of fame that’s strongly associated with security technology, people assume he should be a genius level crypto-hacker and are quick to point out that he’s not. Really? Ed is risking his life because he believes in something. I admire his dedication to the cause, and I enjoy working with him because he’s got good ideas, a good heart, and he’s fun to be with.

    Because I also have a public profile, the Idol Effect impacts me too. I’m bad at math, can’t tie knots, a mediocre programmer…the list goes on. If there’s firmware in a product I’ve touched, it’s likely to have been written by Sean ‘xobs’ Cross, not me. If there’s analytics or informatics involved, it’s likely my partner wrote the analysis scripts. She also edits all my blog posts (including this one) and has helped me craft my most viral tweets – because she’s a genius at informatics, she can run analyses on how to target key words and pick times of day to get maximum impact. The fact that I have a team of people helping me polish my work makes me look better than I really am, and people tend to assign capabilities to me that I don’t really have. Does this mean I am a front, fraud or a persona?

    I imagine Naomi is a victim of Idol Effect too. Similar to Snowden, one of the reasons I’ve enjoyed interacting with Naomi is that she’s been quick to correct misconceptions about her abilities, she’s not afraid to ask for help, and she’s a quick learner. Though many may disapprove of her rhetoric on Twitter, please keep in mind English is her second language — her sole cultural context in which she learned English was via the Internet by reading social media and chat rooms.

    Based on the rumors I’ve read, it seems fans and observers have inflated expectations for her abilities, and because of uncorrected prototype bias, she faces extra scrutiny to prove her abilities. Somehow the fact that she almost cuts her finger using a scraper to remove a 3D print is “evidence” that she’s not a Maker. If that’s true, I’m not a Maker either. I always have trouble releasing 3D prints from print stages. They’ve routinely popped off and flown across the room, and I’ve almost cut my fingers plenty of times with the scraper. But I still keep on trying and learning – that’s the point. And then there’s the suggestion that because a man holds the camera, he’s feeding her lines.

    When a man harnesses the efforts of a team, they call him a CEO and give him a bonus. But when a woman harnesses the efforts of a team, she gets accused of being a persona and a front. This is uncorrected Prototype Bias meeting unrealistic expectations due to the Idol Effect.

    The story might end there, but things recently got a whole lot worse…

    3. Power Asymmetry

    “With great power comes great responsibilities.”
    -from Spider Man

    Power is not distributed evenly in the world. That’s a fact of life. Not acknowledging the role power plays leads to systemic abuse, like those documented in the Caldbeck or Weinstein scandals.

    Editors and journalists – those with direct control over what gets circulated in the media – have a lot of power. Their thoughts and opinions can reach and influence a massive population very quickly. Rumors are just rumors until media outlets breathe life into them, at which point they become an incurable cancer on someone’s career. Editors and journalists must be mindful of the power they wield and held accountable for when it is mis-used.

    As CEO of Maker Media and head of an influential media outlet, especially among the DIY community, Dale Dougherty wields substantial power. So a tweet promulgating the idea that Naomi might be a persona or a fake does not land lightly. In the post-truth era, it’s especially incumbent upon traditional media to double-check rumors before citing them in any context.

    What is personally disappointing is that Dale reached out to me on November 2nd with an email asking what I thought about an anonymous post that accused Naomi of being a fake. I vouched for Naomi as a real person and as a budding Maker; I wrote back to Dale that “I take the approach of interacting with her like any other enthusiastic, curious Maker and the resulting interactions have been positive. She’s a fast learner.”

    Yet Dale decided to take an anonymous poster’s opinion over mine (despite a long working relationship with Make), and a few days later on November 5th he tweeted a link to the post suggesting Naomi could be a fake or a fraud, despite having evidence of the contrary.

    So now Naomi, already facing prototype bias and idol-effect expectations, gets a big media personality with substantial power propagating rumors that she is a fake and a fraud.

    But wait, it gets worse because Naomi is in China!

    4. Guanxi Bias

    In China, guanxi (关系) is everything. Public reputation is extremely hard to build, and quick to lose. Faking and cloning is a real problem, but it’s important to not lose sight of the fact that there are good, hard-working people in China as well. So how do the Chinese locals figure out who to trust? Guanxi is a major mechanism used inside China to sort the good from the bad – it’s a social network of credible people vouching for each other.

    For better or for worse, the Chinese feel that Western faces and brands are more credible. The endorsement of a famous Western brand carries a lot of weight; for example Leonardo DiCaprio is the brand ambassador for BYD (a large Chinese car maker).

    Maker Media has a massive reputation in China. From glitzy Maker Faires to the Communist party’s endorsement of Maker-ed and Maker spaces as a national objective, an association or the lack thereof with Maker Media can make or break a reputation. This is no exception for Naomi. Her uniqueness as a Maker combined with her talent at marketing has enabled her to do product reviews and endorsements as source of income.

    However, for several years she’s been excluded from the Shenzhen Maker Faire lineup, even for events that should have been a shoo-in for her: wearables, Maker fashion shows, 3D printing. Despite this lack of endorsement, she’s built her own social media follower base both inside and outside of China, and built a brand around herself.

    Unfortunately, when the CEO of Maker Media, a white male leader of an established American brand, suggested Naomi was a potential fake, the Internet inside China exploded on her. Sponsors cancelled engagements with her. Followers turned into trolls. She can’t be seen publicly with men (because others will say the males are the real Maker, see “prototype bias”), and as a result faces a greater threat of physical violence.

    A single innuendo, amplified by Power Asymmetry and Guanxi Bias, on top of Idol Effect meshed against Prototype Bias, has destroyed everything a Maker has worked so hard to build over the past few years.

    If someone spread lies about you and destroyed your livelihood – what would you do? Everyone would react a little differently, but make no mistake: at this point she’s got nothing left to lose, and she’s very angry.

    Reflection

    Although Dale had issued a public apology about the rumors, the apology fixes her reputation as much as saying “sorry” repairs a vase smashed on the floor.

    Image: Mindy Georges CC BY-NC

    At this point you might ask — why would Dale want to slander Naomi?

    I don’t know the background, but prior to Dale’s tweet, Naomi had aggressively dogged Dale and Make about Make’s lack of representation of women. Others have noted that Maker Media has a prototype bias toward white males. Watch this analysis by Leah Buechley, a former MIT Media Lab Professor:

    Dale could have recognized and addressed this core issue of a lack of diversity. Instead, Dale elected to endorse unsubstantiated claims and destroy a young female Maker’s reputation and career.

    Naomi has a long, uphill road ahead of her. On the other hand, I’m sure Dale will do fine – he’s charismatic, affable, and powerful.

    When I sit and think, how would I feel if this happened to the women closest to me? I get goosebumps – the effect would be chilling; the combination of pervasive social biases would overwhelm logic and fact. So even though I may not agree with everything Naomi says or does, I have decided that in the bigger picture, hiding in complicit silence on the sidelines is not acceptable.

    We need to acknowledge that prototype bias is real; if equality is the goal, we need to be proactive in correcting it. Just because someone is famous doesn’t mean they are perfect. People with power need to be held accountable in how they wield it. And finally, cross-cultural issues are complicated and delicate. All sides need to open their eyes, ears, and hearts and realize we’re all human. Tweets may seem like harmless pricks to the skin, but we all bleed when pricked. For humanity to survive, we need to stop pricking each other lest we all bleed to death.

    /me dons asbestos suit

    Update: November 20, 2017
    Make has issued an apology, and Naomi has accepted the apology. My sincere thanks to the effort and dedication of everyone who helped make this right.

    by bunnie at November 08, 2017 03:19 PM

    November 06, 2017

    Harald Welte

    On the Linux Kernel Enforcement Statement

    I'm late with covering this here, but work overload is taking its toll on my ability to blog.

    On October 16th, key Linux kernel developers released and announced the Linux Kernel Community Enforcement Statement.

    In its actual text, those key kernel developers cover

    • compliance with the reciprocal sharing obligations of GPLv2 is critical and mandatory
    • acknowledgement of the right to enforce
    • expression of interest to ensure that enforcement actions are conducted in a manner beneficial to the larger community
    • a method to provide reinstatement of rights after ceasing a license violation (see below)
    • that legal action is a last resort
    • that after resolving any non-compliance, the formerly non-compliant user is welcome back to the community

    I wholeheartedly agree with those. This should be no surprise as I've been one of the initiators and signatories of the earlier statement of the netfilter project on GPL enforcement.

    On the reinstatement of rights

    The enforcement statement then specifically expresses the view of the signatories on the specific aspect of license termination. Particularly in the US, there is a strong opinion among legal scholars that if the rights under the GPLv2 are terminated due to non-compliance, the infringing entity needs an explicit reinstatement of rights from the copyright holder. The enforcement statement now basically states that the signatories believe the rights should automatically be reinstated if the license violation ceases within 30 days of the violator being notified of it.

    To people like me living in the European (and particularly German) legal framework, this has very little to no implications. It has been the major legal position that any user, even an infringing one, can automatically obtain a new license as soon as he no longer violates it. He just (really or notionally) obtains a new copy of the source code, at which time he again gets a new license from the copyright holders, as long as he fulfills the license conditions.

    So my personal opinion, as a non-lawyer active in GPL compliance, is that the reinstatement statement changes little to nothing in the jurisdiction I operate in. It merely expresses the signatories' intent and interest in a similar approach in other jurisdictions.

    by Harald Welte at November 06, 2017 11:00 PM

    SFLC sues SFC over trademark infringement

    As the Software Freedom Conservancy (SFC) has publicly disclosed on their website, it appears that the Software Freedom Law Center (SFLC) has filed a trademark infringement lawsuit against SFC.

    SFLC launched SFC in 2006, and SFLC has helped and endorsed SFC in the past.

    This lawsuit is hard to believe. What has this community come to, if its various members - who all used to be respected equally - start filing lawsuits against each other?

    It's of course not known what kind of negotiations might have happened out of court before the actual lawsuit was filed. Nevertheless, one would have hoped that people are able to talk to each other, and that mutual respect for working on different aspects, and with possibly slightly different strategies, would have resulted in a less confrontational approach to resolving any dispute.

    To me, this story just looks like there can only be losers on all sides, by far not just limited to the two entities in question.

    On lwn.net some people, including high-ranking members of the FOSS community have started to spread conspiracy theories as to whether there's any secret scheming behind the scenes, particularly from the Linux Foundation towards SFLC to cause trouble towards the SFC and their possibly-not-overly-enjoyed-by-everyone enforcement activities.

    I think this is complete rubbish. Neither have I ever had the impression that the LF is completely opposed to license enforcement to begin with, nor do I have remotely enough imagination to see them engage in such malicious scheming.

    What motivates SFLC and/or Eben to attack their former offspring is, however, inexplicable to the bystander. One hopes there is no connection to his departure from the FSF about one year ago, where he had served as general counsel for more than two decades.

    by Harald Welte at November 06, 2017 11:00 PM

    November 03, 2017

    Free Electrons

    Back from Kernel Recipes: slides and videos

    As we announced previously, we participated in the Embedded Recipes and Kernel Recipes conferences in September in Paris. Three people from Free Electrons attended the event: Free Electrons CEO Michael Opdenacker, and Free Electrons engineers Mylène Josserand and Maxime Ripard.

    Introduction to Yocto Project / OpenEmbedded, by Mylène Josserand

    Mylène Josserand gave an Introduction to the Yocto Project / OpenEmbedded-core. The slides are available in PDF or as LaTeX code.

    An introduction to the Linux DRM subsystem, by Maxime Ripard

    Maxime Ripard gave an Introduction to the Linux DRM subsystem. The slides are available in PDF or as LaTeX code.

    Other videos and slides

    The slides of all talks are available on the Embedded Recipes and Kernel Recipes conference websites. Youtube playlists are available for Embedded Recipes 2017 and Kernel Recipes 2017 as well.

    Conclusion

    With its special one-track format, an attendance limited to 100 people, an excellent choice of talks and nice social events, Kernel Recipes remains a very good conference that we really enjoyed. Embedded Recipes, which had its first edition this year, followed the same principle, with the same success. We’re looking forward to attending next year’s editions, and hopefully contributing a few talks as well. See you all at Embedded and Kernel Recipes in 2018!

    by Thomas Petazzoni at November 03, 2017 08:44 AM

    November 02, 2017

    Free Electrons

    Back from ELCE 2017: slides and videos

    Free Electrons participated in the Embedded Linux Conference Europe last week in Prague. With 7 engineers attending, 4 talks, one BoF and a poster at the technical showcase, we had a strong presence at this major conference of the embedded Linux ecosystem. All of us had a great time at this event, attending interesting talks and meeting numerous open-source developers.

    Free Electrons team at the Embedded Linux Conference Europe 2017

    Free Electrons team at the Embedded Linux Conference Europe 2017. Top, from left to right: Maxime Ripard, Grégory Clement, Boris Brezillon, Quentin Schulz. Bottom, from left to right: Miquèl Raynal, Thomas Petazzoni, Michael Opdenacker.

    In this first blog post about ELCE, we want to share the slides and videos of the talks we have given during the conference.

    SD/eMMC: New Speed Modes and Their Support in Linux – Gregory Clement

    Since the introduction of the original “default” (DS) and “high speed” (HS) modes, the SD card standard has evolved by introducing new speed modes, such as SDR12, SDR25, SDR50, SDR104, etc. The same happened to the eMMC standard, with the introduction of new high speed modes named DDR52, HS200, HS400, etc. The Linux kernel has obviously evolved to support these new speed modes, both in the MMC core and through the addition of new drivers.

    This talk will start by introducing the SD and eMMC standards and how they work at the hardware level, with a specific focus on the new speed modes. With this hardware background in place, we will then detail how these standards are supported by Linux, see what is still missing, and what we can expect to see in the future.

    Slides [PDF], Slides [LaTeX source]

    An Overview of the Linux Kernel Crypto Subsystem – Boris Brezillon

    The Linux kernel has long provided cryptographic support for in-kernel users (like the network or storage stacks) and has been pushed to open these cryptographic capabilities to user-space along the way.

    But what is exactly inside this subsystem, and how can it be used by kernel users? What is the official userspace interface exposing these features and what are non-upstream alternatives? When should we use a HW engine compared to a purely software based implementation? What’s inside a crypto engine driver and what precautions should be taken when developing one?

    These are some of the questions we’ll answer throughout this talk, after having given a short introduction to cryptographic algorithms.

    Slides [PDF], Slides [LaTeX source]

    Buildroot: What’s New? – Thomas Petazzoni

    Buildroot is a popular and easy to use embedded Linux build system. Within minutes, it is capable of generating lightweight and customized Linux systems, including the cross-compilation toolchain, kernel and bootloader images, as well as a wide variety of userspace libraries and programs.

    Since our last “What’s new” talk at ELC 2014, three and a half years have passed, and Buildroot has continued to evolve significantly.

    After a short introduction about Buildroot, this talk will go through the numerous new features and improvements that have appeared over the last years, and show how they can be useful for developers, users and contributors.

    Slides [PDF], Slides [LaTeX source]

    Porting U-Boot and Linux on New ARM Boards: A Step-by-Step Guide – Quentin Schulz

    Whether it is because of a lack of documentation or because we don’t know where to look or where to start, it is not always easy to get started with U-Boot or Linux, and to know how to port them to a new ARM platform.

    Based on experience porting modern versions of U-Boot and Linux on a custom Freescale/NXP i.MX6 platform, this talk will offer a step-by-step guide through the porting process. From board files to Device Trees, through Kconfig, device model, defconfigs, and tips and tricks, join this talk to discover how to get U-Boot and Linux up and running on your brand new ARM platform!

    Slides [PDF], Slides [LaTeX source]

    BoF: Embedded Linux Size – Michael Opdenacker

    This “Birds of a Feather” session will start with a quick update on available resources and recent efforts to reduce the size of the Linux kernel and the filesystem it uses.

    An ARM based system running the mainline kernel with about 3 MB of RAM will also be demonstrated.

    If you are interested in the size topic, please join this BoF and share your experience, the resources you have found and your ideas for further size reduction techniques!

    Slides [PDF], Slides [LaTeX source]

    by Thomas Petazzoni at November 02, 2017 10:54 AM

    October 31, 2017

    Free Electrons

    Free Electrons at NetDev 2.2

    Back in April 2017, Free Electrons engineer Antoine Ténart participated in NetDev 2.1, the most important conference discussing Linux networking support. After the conference, Antoine published a summary of it, reporting on the most interesting talks and topics that were discussed.

    Next week, NetDev 2.2 takes place in Seoul, South Korea, and this time around, two Free Electrons engineers will be attending the event: Alexandre Belloni and Antoine Ténart. We are getting more and more projects with networking-related topics, and therefore the wide range of talks proposed at NetDev 2.2 will definitely help grow our expertise in this field.

    Do not hesitate to get in touch with Alexandre or Antoine if you are also attending this event!

    by Thomas Petazzoni at October 31, 2017 11:00 AM

    October 30, 2017

    Open Hardware Repository

    WRS Low Jitter Daughterboard - WRS Low Jitter board measurement results

    - Additional hardware can improve jitter by two orders of magnitude -

    Chantal van Tour and Jeroen Koelemeij have made new measurements of the performance of the White Rabbit switch [1] with Mattia Rizzi’s additional Low jitter daughterboard (LJD) [2] integrated, enhanced even more with another clean-up oscillator. They describe their incredibly good results in the article “Sub-Nanosecond Time Accuracy and Frequency Distribution” [3]:

    The LJD improves the stability by about a factor of five, in good agreement with results earlier obtained by Rizzi et al. The LJD implementation improves the root mean-square (rms) phase jitter, integrated over the range 1 Hz – 100 kHz, from 13 ps to 2.9 ps.

    While the LJD implementation already significantly improves the default WR Slave phase noise in the low-frequency range, the PLO (clean-up oscillator) leads to a further suppression of noise by 30 to 40 dB over the frequency range 10 Hz – 100 kHz... As a result, the rms jitter of the 10 MHz output is as low as 0.18 ps over the range 1 Hz – 100 kHz, an improvement by nearly two orders of magnitude over default WR.

    The results presented here suggest that the improved WR system may in certain cases provide a good alternative for hydrogen masers. This may be true in particular for situations in which the observational phase coherence is limited by atmospheric conditions, rather than by local-oscillator stability. Another advantage of a WR-based frequency distribution system is that it also distributes phase. In other words, all nodes in the WR network will have the same phase, which may allow reducing the ‘search window’ in initial fringe searches.

    Note that the results with the additional clean-up oscillator require the White Rabbit Grandmaster (GM) to be locked to a sufficiently stable oscillator, such as a rubidium or cesium clock, or a hydrogen maser. The clean-up oscillator will not be able to track the free-running clock of a WR master, which is too unstable. With only the Low jitter daughterboard it will work fine on a free-running GM.

    We will discuss how to proceed to make this hardware available via commercial partners, likely as special low-jitter versions of the White Rabbit Switch.

    [1] https://www.ohwr.org/projects/wr-switch-hw/wiki
    [2] https://www.ohwr.org/projects/wrs-low-jitter/wiki
    [3] https://library.nrao.edu/public/memos/ngvla/NGVLA_22.pdf

    by Erik van der Bij (Erik.van.der.Bij@cern.ch) at October 30, 2017 09:29 AM

    Bunnie Studios

    LiteX vs. Vivado: First Impressions

    Previously, I had written about developing a reference design for the NeTV2 FPGA using Xilinx’s Vivado toolchain. Last year at 33C3 Tim ‘mithro’ Ansell introduced me to LiteX and at his prompting I decided to give it a chance.

    Vivado was empowering because instead of having to code up a complex SoC in Verilog, I could use their pseudo-GUI/TCL interface to create a block diagram that largely automated the task of building the AXI routing fabric. Furthermore, I could access Xilinx’s extensive IP library, which included a very flexible DDR memory controller and a well-vetted PCI-express controller. Because of this level of design automation and available IP, a task that would have taken perhaps months in Verilog alone could be completed in a few days with the help of Vivado.

    The downsides of Vivado are that it’s not open source (free to download, but not free to modify), and that it’s not terribly efficient or speedy. Aside from the ideological objections to the closed-source nature of Vivado, there are some real, pragmatic impacts from the lack of source access. At a high level, Xilinx makes money selling FPGAs – silicon chips. However, to attract design wins they must provide design tools and an IP ecosystem. The development of this software is directly subsidized by the sale of chips.

    This creates an interesting conflict of interest when it comes to the efficiency of the tools – that is, how good they are at optimizing designs to consume the least amount of silicon possible. Spending money to create area-efficient tools reduces revenue, as it would encourage customers to buy cheaper silicon.

    As a result, the Vivado tool is pretty bad at optimizing designs for area. For example, the PCI express core – while extremely configurable and well-vetted – has no way to turn off the AXI slave bridge, even if you’re not using the interface. Even with the inputs unconnected or tied to ground, the logic optimizer won’t remove the unused gates. Unfortunately, this piece of dead logic consumes around 20% of my target FPGA’s capacity. I could only reclaim that space by hand-editing the machine-generated VHDL to comment out the slave bridge. It’s a simple enough thing to do, and it had no negative effects on the core’s functionality. But Xilinx has no incentive to add a GUI switch to disable the logic, because the extra gates encourage you to “upgrade” by one FPGA size if your design uses a PCI express core. Similarly, the DDR3 memory core devotes 70% of its substantial footprint to a “calibration” block. Calibration typically runs just once at boot, so the logic is idle during normal operation. With an FPGA, the smart thing to do would be to run the calibration, store the values, and then jam the pre-measured values into the application design, thus eliminating the overhead of the calibration block. However, I couldn’t implement this optimization since the DDR3 block is provided as an opaque netlist. Finally, the AXI fabric automation – while magical – scales poorly with the number of ports. In my most recent benchmark design done with Vivado, 50% of the chip is devoted to the routing fabric, 25% to the DDR3 block, and the remainder to my actual application logic.

    Tim mentioned that he thought the same design when using LiteX would fit in a much smaller FPGA. He has been using LiteX to generate the FPGA “gateware” (bitstreams) to support his HDMI2USB video processing pipelines on various platforms, ranging from the Numato-Opsis to the Atlys, and he even started a port for the NeTV2. Intrigued, I decided to port one of my Vivado designs to LiteX so that I could do an apples-to-apples comparison of the two design flows.

    LiteX is a soft-fork of Migen/MiSoC – a python-based framework for managing hardware IP and auto-generating HDL. The IP blocks within LiteX are completely open source, and so can be targeted across multiple FPGA architectures. However, for low-level synthesis, place & route, and bitstream generation, it still relies upon proprietary chip-specific vendor tools, such as Vivado when targeting Artix FPGAs. It’s a little bit like an open source C compiler that spits out assembly, so it still requires vendor-specific assemblers, linkers, and binutils. While it may seem backward to open the compiler before the assembler, remember that for software, an assembler’s scope of work is simple — primarily within well-defined 32-bit or so opcodes. However, for FPGAs, the “assembler” (place and route tool) has the job of figuring out where to place single-bit primitives within an “opcode” that’s effectively several million bits long, with potential cross-dependencies between every bit. The abstraction layers, while parallel, aren’t directly comparable.
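    To give a flavor of what “managing hardware IP and auto-generating HDL” from Python looks like in practice, here is a minimal Migen-style sketch (my own toy example, not taken from the NeTV2 or MiSoC code; the names and clock period are arbitrary) that turns a Python description into Verilog:

    from migen import Module, Signal, If
    from migen.fhdl.verilog import convert

    class Blinker(Module):
        def __init__(self, period=50000000):
            self.led = Signal()
            counter = Signal(max=period)
            # Synchronous logic: count clock cycles, toggle the LED once per period
            self.sync += If(counter == period - 1,
                            counter.eq(0),
                            self.led.eq(~self.led)
                         ).Else(
                            counter.eq(counter + 1)
                         )

    m = Blinker()
    print(convert(m, ios={m.led}))  # emit the generated Verilog for this module

    Frameworks like LiteX build on this kind of description to assemble whole SoCs, wiring buses, CSRs and IP cores together from Python.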

    Let me preface my experience with the statement that I have a love-hate relationship with Python. I’ve used Python a few times for “recreational” projects and small tools, and for driving bits of automation frameworks. But I’ve found Python to be terribly frustrating. If you can use their frameworks from the ground-up, it’s intuitive, fun, even empowering. But if your application isn’t naturally “Pythonic”, woe to you. And I have a lot of needs for bit-banging, manipulating binary files, or grappling with low-level hardware registers, activities that are decidedly not Pythonic. I also spend a lot of time fighting with the “cuteness” of the Python type system and syntax: I’m more of a Rust person. I like strictly typed languages. I am not fond of novelties like using “-1” as the last-element array index and overloading the heck out of binary operators using magic methods.



    Comics courtesy of xkcd, CC BY-NC-2.5

    Surprisingly, I was able to get LiteX up and running within a day. This is thanks in large part to Tim’s effort to create a really comprehensive bootstrapping script that checks out the git repo and all of the submodules (thank you!), and manages your build environment. It just worked; the only bump I encountered was a bit of inconsistent documentation on installing the Xilinx toolchain (for Artix builds you need to grab Vivado; for Spartan, ISE). The whole thing ate about 19GiB of hard drive space, of which 18GiB is the Vivado toolchain.

    I was rewarded with a surprisingly powerful and mature framework for defining SoCs. Thanks to the extensive work of the MiSoC and LiteX crowd, there are already IP cores for DRAM, PCI express, ethernet, video, a softcore CPU (your choice of or1k or lm32) and more. To be fair, I haven’t been able to load these on real hardware and validate their spec-compliance or functionality, but they seem to compile down to the right primitives, so they’ve got the right shape and size. Instead of AXI, they’re using Wishbone for their fabric. It’s not clear to me yet how bandwidth-efficient the MiSoC fabric generator is, but the fact that it’s already in use to route 4x HDMI connections to DRAM on the Numato-Opsis would indicate that it’s got enough horsepower for my application (which only requires 3x HDMI connections).

    As a high-level framework, it’s pretty magical. Large IP instances and corresponding bus ports are allocated on-demand, based on a very high level description in Python. I feel a bit like a toddler who has been handed a loaded gun with the safety off. I’m praying the underlying layers are making sane inferences. But, at least in the case of LiteX, if I don’t agree with the decisions, it’s open source enough that I could try to fix things, assuming I have the time and gumption to do so.

    For my tool flow comparison, I implemented a simple 2x HDMI-in to DDR3 to 1x HDMI-out design in both Vivado and in LiteX. Creating the designs is about the same effort on both flows – once you have the basic IP blocks, instantiating bus fabric and allocation of addressing is largely automated in each case. Vivado is superior for pin/package layout thanks to its graphical planning tool (I find an illustration of the package layout to be much more intuitive than a textual list of ball-grid coordinates), and LiteX is a bit faster for design creation despite the usual frustrations I have with Python (up to the reader’s bias to decide whether it’s just that I have a different way of seeing things or if my intellect is insufficient to fully appreciate the goodness that is Python).


    Pad layout planning in Vivado is aided by a GUI


    Example of LiteX syntax for pin constraints
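    For reference, the constraint style shown above looks roughly like the following Migen/LiteX platform snippet (a sketch only: the pin locations and I/O standard are placeholders rather than the actual NeTV2 pinout, and the exact import path may vary between Migen/LiteX versions):

    from migen.build.generic_platform import Pins, Subsignal, IOStandard

    _io = [
        # (name, index, pins..., attributes...)
        ("clk50",    0, Pins("J19"), IOStandard("LVCMOS33")),
        ("user_led", 0, Pins("M21"), IOStandard("LVCMOS33")),
        ("serial",   0,
            Subsignal("tx", Pins("E14")),
            Subsignal("rx", Pins("E13")),
            IOStandard("LVCMOS33")
        ),
    ]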

    But from there, the experience between the two diverges rapidly. The main thing that’s got me excited about LiteX is the speed and efficiency of its high-level synthesis. LiteX produces a design that uses about 20% of an XC7A50 FPGA with a runtime of about 10 minutes, whereas Vivado produces a design that consumes 85% of the same FPGA with a runtime of about 30-45 minutes.

    Significantly, LiteX tends to “fail fast”, so syntax errors or small problems with configurations become obvious within a few seconds, if not a couple minutes. However, Vivado tends to “fail late” – a small configuration problem may not pop up until about 20 minutes into the run, due to the clumsy way it manages out-of-context block synthesis and build dependencies. This means that despite my frustrations with the Python syntax, the penalty paid for small errors is much less in terms of time – so overall, I’m more productive.

    But the really compelling point is the efficiency. The fact that LiteX generates more efficient HDL means I can potentially shave a significant amount of cost out of a design by going to a smaller FPGA. Remember, both LiteX and Vivado use the same back-end for low-level synthesis and place and route. The difference is entirely in the high-level design automation – and this is a level that I can see being a good match for a Python-based framework. You’re not really designing hardware with Python (eventually it all turns into Verilog) so much as managing and configuring libraries of IP, something that Python is quite well suited for. To wit, I dug around in the MiSoC libraries a bit and there seem to be some serious logic designs using this Python syntax. I’m not sure I want to wrap my head around this coding style, but the good news is I can still write my leaf cells in Verilog and call them from the high-level Python integration framework.

    So, I’m cautiously proceeding to use LiteX as the main design flow going forward for NeTV2. We’ll see how the bitstream proves out in terms of timing and functionality once my next generation hardware is available, but I’m optimistic. I have a few concerns about how debugging will work – I’ve found the Xilinx ILA cores to be extremely powerful tools and the ability to automatically reverse engineer any complex design into a schematic (a feature built into Vivado) helps immensely with finding timing and logic bugs. But with a built-in soft CPU core, the “LiteScope” logic analyzer (with sigrok support coming soon), and fast build times, I have a feeling there is ample opportunity to develop new, perhaps even more powerful methods within LiteX to track down tricky bugs.

    My final thought is that LiteX, in its current state, is probably best suited for people trained to write software who want to design hardware, rather than for people classically trained in circuit design who want a tool upgrade. The design idioms and intuitions built into LiteX pull strongly from the practices of software designers, which means a lot of “obvious” things are left undocumented that will throw outsiders (e.g. hardware designers like me) for a loop. There’s no question about the power and utility of the design flow – so, as the toolchain matures and documentation improves, I’m optimistic that this could become a popular design flow for hardware projects of all magnitudes.


    Interested? Tim has suggested the following links for further reading:

    by bunnie at October 30, 2017 08:45 AM

    Free Electrons

    Buildroot training course updated to Buildroot 2017.08

    Back in June 2015, we announced the availability of a training course on Buildroot, a popular and easy to use embedded Linux build system. A year later, we updated it to cover Buildroot 2016.05. We are happy to announce a new update: we now cover Buildroot 2017.08.

    The most significant updates are:

    • Presentation of the Long Term Supported releases of Buildroot, a topic we also presented in a previous blog post
    • Appearance of the new top-level utils/ directory, containing various utilities directly useful for the Buildroot user, such as test-pkg, check-package, get-developers or scanpypi
    • Removal of $(HOST_DIR)/usr/, as everything has been moved up one level to $(HOST_DIR), to make the Buildroot SDK/toolchain more conventional
    • Document the new organization of the skeleton package, now split into several packages, to properly support various init systems. A new diagram has been added to clarify this topic.
    • List all package infrastructures that are available in Buildroot, since their number is growing!
    • Use SPDX license codes for licensing information in packages, which is now mandatory in Buildroot
    • Remove the indication that dependencies of host (i.e. native) packages are derived from the dependencies of the corresponding package, since this is no longer the case
    • Indicate that the check for hashes has been extended to also allow checking the hash of license files, which makes it possible to detect changes in the license text.
    • Update the BR2_EXTERNAL presentation to cover the fact that multiple BR2_EXTERNAL trees are now supported.
    • Use the new relocatable SDK functionality that appeared in Buildroot 2017.08.

    The practical labs have of course been updated to use Buildroot 2017.08, but also Linux 4.13 and U-Boot 2017.07, to remain current with upstream versions. In addition, they have been extended with two additional steps:

    • Booting the Buildroot generated system using TFTP and NFS, as an alternative to the SD card we normally use
    • Using genimage to generate a complete and ready to flash SD card image

    We will be delivering this course to one of our customers in Germany next month, and are of course available to deliver it on-site anywhere in the world if you’re interested! And of course, we continue to publish, for free, all the materials used in this training session: slides and labs.

    by Thomas Petazzoni at October 30, 2017 08:40 AM

    Open Hardware Repository

    Yet Another Micro-controller (YAM) - YAM assembler 'yamasm' V1.1

    The YAM assembler yamasm (now versioned as V1.1) has received a number of enhancements, the main one being 3-operand support.

    YAM core HDL is unchanged (still V1.4).
    A new component (namely wb_yam) has been added.
    It provides some YAM support through a Wishbone interface:
    • Program download
    • Debugging...

    Software related sub-directories have been re-organized.

    The yamebs utility depends on some ESRF packages.
    It is delivered as is and may be useful as a basis for supporting the wb_yam component.

    This new distribution is referred to as YAM V1.4.1.

    by Christian Herve at October 30, 2017 08:37 AM

    October 25, 2017

    Bunnie Studios

    Name that Ware October 2017

    The Ware for October 2017 is shown below.

    Sometimes a ware just presents itself to you among your travels; thankfully cell phone cameras have come a long way.

    by bunnie at October 25, 2017 06:40 PM

    Winner, Name that Ware September 2017

    The Ware for September 2017 is a WP 5007 Electrometer. I’ll give this one to Ingo, for the first mention of an electrometer. Congrats, email me for your prize! And @zebonaut, agreed, polystyrene caps FTW :)

    by bunnie at October 25, 2017 06:39 PM

    October 21, 2017

    Open Hardware Repository

    1:8 Pulse/Frequency Distribution Amplifier - Trimmer-cap modification for output-skew tuning on PDA

    The pulse-distribution amplifier output skew can be tuned close to zero by the addition of a trimmer-cap on the input of the output-buffer. See initial results in Anders' blog http://www.anderswallin.net/2017/09/delay-tuning-with-trimmer-caps/

    by Anders Wallin (anders.e.e.wallin@gmail.com) at October 21, 2017 04:34 PM

    October 19, 2017

    Harald Welte

    Obtaining the local IP address of an unbound UDP socket

    Sometimes one finds an interesting problem and is surprised that there is not a multitude of blog posts, StackOverflow answers or the like about it.

    A problem that is (I think) not so uncommon when working with datagram sockets is that you may want to know the local IP address that the OS/kernel chooses when sending a packet to a given destination.

    With an unbound UDP socket, you basically send and receive packets to and from any number of peers using a single socket. When sending a packet to destination Y, you simply pass the destination address/port into the sendto() socket function, and the OS/kernel will figure out which of its local IP addresses will be used for reaching this particular destination.

    If you're a dumb host with a single default router, then the answer to that question is simple. But in any reasonably non-trivial use case, your host will have a variety of physical and/or virtual network devices with any number of addresses on them.

    Why would you want to know that address? Because maybe you need to encode that address as part of a packet payload. In the current use case that we have, it is the OsmoMGW, implementing the IETF MGCP Media Gateway Control Protocol.

    So what can you do? You can actually create a new "trial" socket, not bind it to any specific local address/port, but connect() it to the destination of your IP packets. Then you do a getsockname(), which will give you the local address/port the kernel has selected for this socket. And that's exactly the answer to your question. You can now close the "trial" socket and have learned which local IP address the kernel would use if you were to send a packet to that destination.

    At least on Linux, this works. While getsockname() is standard BSD sockets API, I'm not sure how portable it is to use it on a socket that has not been explicitly bound by a prior call to bind().
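    For illustration, here is the same trick as a small Python sketch (the actual OsmoMGW code is C, of course; the destination port below is an arbitrary placeholder, since connect() on a UDP socket never puts a packet on the wire):

    import socket

    def local_ip_for(dest_ip, dest_port=9):
        """Return the local IP address the kernel would pick to reach dest_ip."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            # connect() only records the peer and triggers route/source selection
            s.connect((dest_ip, dest_port))
            return s.getsockname()[0]
        finally:
            s.close()

    print(local_ip_for("192.0.2.1"))  # prints the address of the chosen egress interface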

    by Harald Welte at October 19, 2017 10:00 PM

    October 13, 2017

    Bunnie Studios

    Why I’m Using Bitmarks on my Products

    One dirty secret of hardware is that a profitable business isn’t just about design innovation, or even product cost reduction: it’s also about how efficiently one can move stuff from point A to B. This explains the insane density of hardware suppliers around Shenzhen; it explains the success of Ikea’s flat-packed furniture model; and it explains the rise of Amazon’s highly centralized, highly automated warehouses.

    Unfortunately, reverse logistics – the system for handling returns & exchanges of hardware products – is not something on the forefront of a hardware startup’s agenda. In order to deal with defective products, one has to ship a product first – an all-consuming goal. However, leaving reverse logistics as a “we’ll fix it after we ship” detail could saddle the venture with significant unanticipated customer support costs, potentially putting the entire business model at risk.

    This is because logistics are much more efficient in the “forward” direction: the cost of a centralized warehouse to deliver packages to an end consumer’s home address is orders of magnitude less than it is for a residential consumer to mail that same parcel back to the warehouse. This explains the miracle of Amazon Prime, when overnighting a pair of hand-knit mittens to your mother somehow costs you $20. Now repeat the hand-knit mittens thought experiment and replace it with a big-screen TV that has to find its way back to a factory in Shenzhen. Because the return shipment can no longer take advantage of bulk shipping discounts, the postage to China is likely more than the cost of the product itself!

    Because of the asymmetry in forward versus reverse logistics cost, it’s generally not cost effective to send defective material directly back to the original factory for refurbishing, recycling, or repair. In many cases the cost of the return label plus the customer support agent’s time will exceed the cost of the product. This friction in repatriating defective product creates opportunities for unscrupulous middlemen to commit warranty fraud.

    The basic scam works like this: a customer calls in with a defective product and gets sent a replacement. The returned product is sent to a local processing center, where it may be declared unsalvageable and slated for disposal. However, instead of a proper disposal, the defective goods “escape” the processing center and are resold as new to a different customer. The duped customer then calls in to exchange the same defective product and gets sent a replacement. Rinse, lather, repeat, and someone gets rich quick selling scrap at full market value.

    Similarly, high-quality counterfeits can sap profits from companies. Clones of products are typically produced using cut-rate or recycled parts but sold at full price. What happens when customers then find quality issues with the clone? That’s right – they call the authentic brand vendor and ask for an exchange. In this case, the brand makes zero money on the customer but incurs the full cost of supporting a defective product. This kind of warranty fraud is pandemic in smart phones and can cost producers many millions of dollars per year in losses.


    High-quality clones, like the card on the left, can cost businesses millions of dollars in warranty fraud claims.

    Serial numbers help mitigate these problems, but it’s easy to guess a simple serial number. More sophisticated schemes tie serial numbers to silicon IDs, but that necessitates a system which can reliably download the serialization data from the factory. This might seem a trivial task but for a lot of reasons – from failures in storage media to human error to poor Internet connectivity in factories – it’s much harder than it seems to make this happen. And for a startup, losing an entire lot of serialization data due to a botched upload could prove fatal.

    As a result, most hardware startups ship products with little to no plan for product serialization, much less a plan for reverse logistics. When the first email arrives from an unhappy customer, panic ensues, and the situation is quickly resolved, but by the time the product arrives back at the factory, the freight charges alone might be in the hundreds of dollars. Repeat this exercise a few dozen times, and any hope for a profitable run is rapidly wiped out.

    I’ve wrestled with this problem on and off through several startups of my own and finally landed on a solution that looks promising: it’s reasonably robust, fraud-resistant, and dead simple to implement. The key is the bitmark – a small piece of digital data that links physical products to the blockchain.

    Most people are familiar with blockchains through Bitcoin. Bitcoin uses the blockchain as a public ledger to prevent double-spending of the same virtual coin. This same public ledger can be applied to physical hardware products through a bitmark. Products that have been bitmarked can have their provenance tracked back to the factory using the public ledger, thus hampering cloning and warranty fraud – the physical equivalent of double-spending a Bitcoin.

    One of my most recent hardware startups, Chibitronics, has teamed up with Bitmark to develop an end-to-end solution for Chibitronics’ newest microcontroller product, the Chibi Chip.

    As an open hardware business, we welcome people to make their own versions of our product, but we can’t afford to give free Chibi Chips to customers that bought cut-rate clones and then report them as defective for a free upgrade to an authentic unit. We’re also an extremely lean startup, so we can’t afford the personnel to build a full serialization and reverse logistics system from scratch. This is where Bitmark comes in.

    Bitmark has developed a turn-key solution for serialization and reverse logistics triage. They issue us bitmarks as lists of unique, six-word phrases. The six-word phrases are less frustrating for users to type in than strings of random characters. We then print the phrases onto labels that are stuck onto the back of each Chibi Chip.


    Bitmark claim code on the back of a Chibi Chip

    We release just enough of these pre-printed labels to the factory to run our authorized production quantities. This allows us to trace a bitmark back to a given production lot. It also prevents “ghost shifting” – that is, authorized factories producing extra bootleg units on a midnight shift that are sold into the market at deep discounts. Bitmark created a website for us where customers can then claim their bitmarks, thus registering their product and making it eligible for warranty service. In the event of an exchange or return, the product’s bitmark is updated to record this event. Then if a product fails to be returned to the factory, it can’t be re-claimed as defective because the blockchain ledger would evidence that bitmark as being mapped to a previously returned product. This allows us to defer the repatriation of the product to the factory. It also enables us to use unverified third parties to handle returned goods, giving us a large range of options to reduce reverse logistics costs.

    Bitmark also plans to roll out a site where users can verify the provenance of their bitmarks, so buyers can check if a product’s bitmark is authentic and if it has been previously returned for problems before they buy it. This increases the buyer’s confidence, thus potentially boosting the resale value of used Chibi Chips.

    For the cost and convenience of a humble printed label, Bitmark enhances control over our factories, enables production lot traceability, deters cloning, prevents warranty fraud, enhances confidence in the secondary market, and gives us ample options to streamline our reverse logistics.

    Of course, the solution isn’t perfect. A printed label can be peeled off one product and stuck on another, so people could potentially just peel labels off good products and resell the labels to users with broken clones looking to upgrade by committing warranty fraud. This scenario could be mitigated by using tamper-resistant labels. And for every label that’s copied by a cloner, there’s one victim who will have trouble getting support on an authentic unit. Also, if users are generally lax about claiming their bitmark codes, it creates an opportunity for labels to be sparsely duplicated in an effort to ghost-shift/clone without being detected; but this can be mitigated with a website update that encourages customers to register their bitmarks immediately, before using the web-based services tied to the product. We also have to exercise care in handling lists of unclaimed phrases because, until a customer registers their bitmark claim phrase in the blockchain, the phrases have value to would-be fraudsters.

    But overall, for the cost and convenience, the solution outperforms all the other alternatives I’ve explored to date. And perhaps most importantly for hardware startups like mine that are short on time and long on tasks, printing bitmarks is simple enough for us to implement that it’s hard to justify doing anything else.

    Disclosure: I am a technical advisor and shareholder of Bitmark.

    by bunnie at October 13, 2017 02:32 PM

    October 12, 2017

    Kristian Paul

    Affordable home automation

    I wanted to monitor my chickens remotely, and after some web digging I found the open source software Home Assistant [1], which allowed me to use cheap devices like a Raspberry Pi and some D-Link/Digoo cameras to monitor them via the web.
    
    
    
    It doesn't end there: I also bought a Sonoff device and, after reflashing it [2], got it working with Home Assistant over MQTT, so now I can also turn lights and fans on and off.
    
    
    
    There are also things like automation: when I'm at home, the presence of my Bluetooth devices turns the lights on. Small details like automatically turning lights off help make your life easier.
    
    There is probably more to do; the next challenge is voice :-)
    
    [1] http://home-assistant.io/
    [2] https://github.com/arendst/Sonoff-MQTT-OTA
    

    October 12, 2017 05:00 AM

    October 09, 2017

    Harald Welte

    Invited keynote + TTCN-3 talk at netdevconf 2.2 in Seoul

    It was a big surprise that I've recently been invited to give a keynote on netfilter history at netdevconf 2.2.

    First of all, I wouldn't have expected netfilter to be that relevant next to all the other [core] networking topics at netdevconf. Secondly, I've not been doing any work on netfilter for about a decade now, so my memory is a bit rusty by now ;)

    Speaking of Rusty: Timing wise there is apparently a nice coincidence that I'll be able to meet up with him in Berlin later this month, i.e. hopefully we can spend some time reminiscing about old times and see what kind of useful input he has for the keynote.

    I'm also asking my former colleagues and successors in the netfilter project to share with me any note-worthy events or anecdotes, particularly also covering the time after my retirement from the core team. So if you have something that you believe shouldn't be missing from a keynote on netfilter project history, please reach out to me by e-mail ASAP and let me know about it.

    To try to fend off the elder[ly] statesmen image that goes along with being invited to give keynotes about the history of projects you were working on a long time ago, I also submitted an actual technical talk: TTCN-3 and Eclipse Titan for testing protocol stacks, in which I'll cover my recent journey into TTCN-3 and TITAN land, and how I think those tools can help us in the Linux [kernel] networking community to productively produce tests for the various protocols.

    As usual for netdevconf, there are plenty of other exciting talks in the schedule.

    I'm very much looking forward to both visiting Seoul again, as well as meeting lots of the excellent people involved in the Linux networking subsystems. See ya!

    by Harald Welte at October 09, 2017 10:00 PM

    October 08, 2017

    Harald Welte

    Ten years Openmoko Neo1973 release anniversary dinner

    As I noted earlier this year, 2017 marks the tenth anniversary of shipping the first Openmoko phone, the Neo1973.

    On this occasion, a number of the key people managed to gather for an anniversary dinner in Taipei. Thanks to everyone who could make it; it was very good to see them together again. Sadly, far from everyone could attend. You have been missed!

    The award for the most crazy attendee of the meeting goes out to my friend Milosch, who has actually flown from his home in the UK to Taiwan, only to meet up with old friends and attend the anniversary dinner.

    You can see some pictures in Milosch's related tweet.

    by Harald Welte at October 08, 2017 10:00 PM

    October 05, 2017

    Free Electrons

    Buildroot Long Term Support releases: from 2017.02 to 2017.02.6 and counting

    Buildroot is a widely used embedded Linux build system. A large number of companies and projects use Buildroot to produce customized embedded Linux systems for a wide range of embedded devices. Most of those devices are now connected to the Internet, and therefore subject to attacks if the software they run is not regularly updated to address security vulnerabilities.

    The Buildroot project publishes a new release every three months, with each release providing a mix of new features, new packages, package updates, build infrastructure improvements… and security fixes. However, until earlier this year, as soon as a new version was published, the maintenance of the previous version stopped. This means that in order to stay up to date in terms of security fixes, users essentially had two options:

    1. Update their Buildroot version regularly. The big drawback is that they get not only security updates, but also many other package updates, which may be problematic when a system is in production.
    2. Stick with their original Buildroot version, carefully monitor CVEs and security vulnerabilities in the packages they use, and update the corresponding packages, which obviously is a time-consuming process.

    Starting with 2017.02, the Buildroot community has decided to offer one long term supported release every year: 2017.02 will be supported one year in terms of security updates and bug fixes, until 2018.02 is released. The usual three-month release cycle still applies, with 2017.05 and 2017.08 already being released, but users interested in a stable Buildroot version that is kept updated for security issues can stay on 2017.02.

    Since 2017.02 was released on February 28th, 2017, six minor versions have been published on a fairly regular basis, almost every month, except in August.

    With about 60 to 130 commits between each minor version, it is relatively easy for users to check what has been changed, and evaluate the impact of upgrading to the latest minor version to benefit from the security updates. The commits integrated in those minor versions are carefully chosen with the idea that users should be able to easily update existing systems.

    In total, those six minor versions include 526 commits, of which 183 commits were security updates, representing roughly one third of the total number of commits. The other commits have been:

    • 140 commits to fix build issues
    • 57 commits to bump versions of packages for bug fixes. These almost exclusively include updates to the Linux kernel, using its LTS versions. For other packages, we are more conservative and generally don’t upgrade them.
    • 17 commits to address issues in the licensing description of the packages
    • 186 commits to fix miscellaneous issues, ranging from runtime issues affecting packages to bugs in the build infrastructure

    The Buildroot community has already received a number of bug reports, patches and suggestions specifically targeting the 2017.02 LTS version, which indicates that developers and companies have started to adopt this LTS version.

    Therefore, if you are interested in using Buildroot for a product, you should probably consider using the LTS version! We very much welcome feedback on this version, and help in monitoring the security vulnerabilities affecting software packages in Buildroot.

    by Thomas Petazzoni at October 05, 2017 07:42 PM

    October 04, 2017

    Harald Welte

    On Vacation

    In case you're wondering about the lack of activity not only on this blog but also in git repositories, mailing lists and the like: I've been on vacation since September 13. It's my usual "one month in Taiwan" routine, during which I spend some time in Taipei, but also take several long motorbike tours around mostly rural Taiwan.

    You can find the occasional snapshot in my twitter feed, such as the pictures here and there.

    by Harald Welte at October 04, 2017 10:00 PM

    October 01, 2017

    Michele's GNSS blog

    Tribute to Prof. Kai Borre


    Whilst attending the latest ION GNSS+ conference, I had confirmation that Prof. Kai Borre passed away this summer. He has been a very important reference for me, especially in the early stages of my career, and I am sure many other radio-navigation, geodesy and DSP engineers feel the same. I knew him pretty well and could not find an epitaph anywhere, so today I feel compelled to leave my tribute to him here, and I hope others will intimately share my feeling.


    Rest in peace Kai. My gratitude, for inspiring me until your very last moment.

    by noreply@blogger.com (Michele Bavaro) at October 01, 2017 07:52 PM

    September 30, 2017

    Bunnie Studios

    Name that Ware, September 2017

    The Ware for September 2017 is shown below.

    And here is the underside of the plug-in module from the left hand side of the PCB:

    Thanks to Chris for sending in this gorgeous ware. I really appreciate both the aesthetic beauty of this ware, as well as the exotic construction techniques employed.

    by bunnie at September 30, 2017 03:34 PM

    Winner, Name that Ware August 2017

    The ware for August 2017 is the controller IC for a self-flashing (two-pin, T1 case) RGB LED. It’s photographed through the lens of the LED, which is why the die appears so distorted. Somehow, Piotr — the first poster — guessed it on the first try without much explanation. Congrats, email me for your prize!

    by bunnie at September 30, 2017 03:32 PM

    September 28, 2017

    Free Electrons

    Free Electrons opens a new office in Lyon, France

    After Toulouse and Orange, Lyon is the third city chosen for opening a Free Electrons office. Since September 1st of this year (2017), Alexandre Belloni and Grégory Clement have been working there, more precisely in Oullins, close to the subway and the train station. This is the first step in growing the Lyon team, with the opportunity to welcome interns and engineers.


    Their new desks are already crowded with many boards running our favorite system.

    by Gregory Clement at September 28, 2017 07:04 AM

    September 27, 2017

    Free Electrons

    Mali OpenGL support on Allwinner platforms with mainline Linux

    As most people know, getting GPU-based 3D acceleration to work on ARM platforms has always been difficult, due to the closed nature of the support for such GPUs. Most vendors provide closed-source binary-only OpenGL implementations in the form of binary blobs, whose quality depends on the vendor.

    This situation is getting better and better through vendor-funded initiatives like the ones for the Broadcom VC4 and VC5, or through reverse engineering projects like Nouveau on Tegra SoCs, Etnaviv on Vivante GPUs, and Freedreno on Qualcomm GPUs. However there are still GPUs where you do not have the option to use a free software stack: PowerVR from Imagination Technologies and Mali from ARM (even though there is some progress on the reverse engineering effort).

    Allwinner SoCs use either a Mali GPU from ARM or a PowerVR from Imagination Technologies, and therefore, support for OpenGL on those platforms using a mainline Linux kernel has always been a problem. This is further complicated by the fact that Allwinner is mostly interested in Android, which uses a different C library, preventing the use of those blobs on traditional glibc-based systems (except through the use of libhybris).

    However, we are happy to announce that Allwinner gave us clearance to publish the userspace binary blobs that allow getting OpenGL supported on Allwinner platforms that use a Mali GPU from ARM, using a recent mainline Linux kernel. Of course, those are closed source binary blobs and not a nice fully open-source solution, but it nonetheless allows everyone to have OpenGL support working, while taking advantage of all the benefits of a recent mainline Linux kernel. We have successfully used those binary blobs on customer projects involving the Allwinner A33 SoCs, and they should work on all Allwinner SoCs using the Mali GPU.

    In order to get GPU support to work on your Allwinner platform, you will need:

    • The kernel-side driver, available on Maxime Ripard’s Github repository. This is essentially the Mali kernel-side driver from ARM, plus a number of build and bug fixes to make it work with recent mainline Linux kernels.
    • The Device Tree description of the GPU. We introduced Device Tree bindings for Mali GPUs in the mainline kernel a while ago, so that Device Trees can describe such GPUs. Such description has been added for the Allwinner A23 and A33 SoCs as part of this commit.
    • The userspace blob, which is available on Free Electrons GitHub repository. It currently provides the r6p2 version of the driver, with support for both fbdev and X11 systems. Hopefully, we’ll gain access to newer versions in the future, with additional features (such as GBM support).

    If you want to use it in your system, the first step is to have the GPU definition in your device tree if it’s not already there. Then, you need to compile the kernel module:

    git clone https://github.com/mripard/sunxi-mali.git
    cd sunxi-mali
    export CROSS_COMPILE=$TOOLCHAIN_PREFIX
    export KDIR=$KERNEL_BUILD_DIR
    export INSTALL_MOD_PATH=$TARGET_DIR
    ./build.sh -r r6p2 -b
    ./build.sh -r r6p2 -i
    

    It should install the mali.ko Linux kernel module into the target filesystem.

    Now, you can copy the OpenGL userspace blobs that match your setup, most likely the fbdev or X11-dma-buf variant. For example, for fbdev:

    git clone https://github.com/free-electrons/mali-blobs.git
    cd mali-blobs
    cp -a r6p2/fbdev/lib/lib_fb_dev/lib* $TARGET_DIR/usr/lib
    

    You should be all set. Of course, you will have to link your OpenGL applications or libraries against those user-space blobs. You can check that everything works using OpenGL test programs such as es2_gears.

    by Maxime Ripard at September 27, 2017 09:34 AM

    September 21, 2017

    Elphel

    Long range multi-view stereo camera with 4 sensors

    Figure 1. Four sensor stereo camera CAD model


    The four-camera stereo rig prototype is capable of measuring distances thousands of times larger than the camera baseline over a wide (60 by 45 degrees) field of view. With 150 mm between the lenses it provides ranging data at 200 meters with 10% accuracy; production units will have higher accuracy. The initial implementation uses software post-processing, but the core part of the software (the tile processor) is designed as an FPGA simulation and will be moved to the actual FPGA of the camera for real time applications.

    Scroll down or just hyper-jump to Scene viewer for the links to see example images and reconstructed scenes.

    Background

    Most modern advances in the area of visual 3d reconstruction are related to structure from motion (SfM), where high quality models are generated from image sequences, including those from uncalibrated cameras (such as cellphone ones). Other fast-growing applications depend on active ranging with either LIDAR scanning technology or time-of-flight (ToF) sensors.

    Each of these methods has its limitations, and while widespread smart phone cameras attracted most of the interest in algorithm and software development, there are some applications where the narrow baseline technology (where the distance between the sensors is much smaller than the distance to the objects) has advantages.

    Such applications include autonomous vehicle navigation where other objects in the scene are moving and 3-d data is needed immediately (not when the complete image sequence is captured), and the elements to be ranged are ahead of the vehicle so previous images would not help much. ToF sensors are still very limited in range (few meters) and the scanning LIDAR systems are either slow to update or have very limited field of view. Passive (visual only) ranging may be desired for military applications where the system should stay invisible by not shining lasers around.

    Technology snippets

    Narrow baseline and subpixel resolution

    The main challenge for narrow baseline systems is that the distance resolution is much worse than the lateral one. The minimal resolved 3d element, the voxel, is very far from resembling a cube (2d pixels are usually squares). With the dimensions we use (pixel size of 0.0022 mm, lens focal length f = 4.5 mm and a baseline of 150 mm), such a voxel at 100 m distance is 50 mm high by 50 mm wide and 32 meters deep. The good thing is that while the lateral resolution generally is just one pixel (it can be better only with additional knowledge about the object), the depth resolution can be improved by an order of magnitude, under reasonable assumptions, by using subpixel resolution. This is possible when there are multiple shifted images of the same object (which for such a high range-to-baseline ratio can safely be assumed fronto-parallel) and every object is represented in each image by multiple pixels. With 0.1 pixel resolution in disparity (the shift between the two images), the depth dimension of the voxel at 100 m distance is 3.2 meters. And as we need multiple-pixel objects for subpixel disparity resolution, the voxel lateral dimensions increase (there is a way to restore the lateral resolution to a single pixel in most cases). With a fixed-size window for image matching we use an 8×8 pixel grid (16×16 pixel overlapping tiles), similar to what is used by some image/video compression algorithms (such as JPEG); the voxel dimensions at 100 meter range then become 0.4 m x 0.4 m x 3.2 m. Still not a cube, but the difference is significantly less dramatic.
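    As a quick sanity check on those numbers, here is the back-of-the-envelope arithmetic in a short Python snippet (values taken from the text above; disparity in pixels is d = B·f/(z·p), so one pixel of disparity corresponds to a depth change of roughly z²·p/(B·f)):

    # Values quoted above
    p = 0.0022    # pixel pitch, mm
    f = 4.5       # focal length, mm
    B = 150.0     # baseline, mm
    z = 100000.0  # distance, mm (100 m)

    lateral = z * p / f              # footprint of one pixel at 100 m
    depth_1pix = z**2 * p / (B * f)  # depth change per 1 pixel of disparity
    depth_01pix = 0.1 * depth_1pix   # with 0.1 pixel disparity resolution

    print(lateral)       # ~49 mm    -> "50 mm high by 50 mm wide"
    print(depth_1pix)    # ~32600 mm -> "32 meters deep"
    print(depth_01pix)   # ~3260 mm  -> "3.2 meters"
    print(8 * lateral)   # ~390 mm   -> "0.4 m" lateral size of an 8x8 pixel tile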

    Subpixel accuracy and the lens distortions

    Matching images with subpixel accuracy requires that the optical distortion of each lens is known and compensated with the same or better precision. The most popular way to represent lens distortion is the radial distortion model, where the relation between the distorted and the ideal pin-hole camera image is expressed as a polynomial of the point radius, so in polar coordinates the angle stays the same while the radius changes. Fisheye lenses are better described with an “f-theta” model, where linear radial distance in the focal plane corresponds to the angle between the lens axis and the ray to the object.

    Such radial models provide accurate results only with ideal lens elements, and only when those elements are assembled so that the axis of each individual element precisely matches the axes of the others, both in position and orientation. In real lenses each optical element has some minor misalignment, and that limits the accuracy of the radial model. For the lenses we dealt with, and with 5 MPix sensors, it was possible to get down to 0.2 – 0.3 pixels, so we supplemented the radial distortion described by the analytical formula with a table-based residual image correction. Such correction reduced the minimal resolved disparity to 0.05 – 0.08 pixels.
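    As an illustration of this two-stage correction, here is a hedged Python sketch (the coefficient names and sign/direction conventions are placeholders, not Elphel's actual calibration format): the analytical radial polynomial handles the bulk of the distortion, and a per-lens residual table absorbs what is left.

    def radial_model(xd, yd, k, cx=0.0, cy=0.0):
        """Common analytical radial model: same angle, scaled radius.

        k = [k1, k2, ...] polynomial coefficients shared by all four lenses.
        """
        x, y = xd - cx, yd - cy
        r2 = x * x + y * y
        scale = 1.0 + sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
        return cx + x * scale, cy + y * scale

    # The small per-lens residual (a smooth 2-d function over the image plane) is
    # stored as a coarse grid of (dx, dy) corrections measured during calibration,
    # bilinearly interpolated at run time and simply added to the model's result.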

    Fixed vs. variable window image matching and FPGA

    Modern multi-view stereo systems that work with wide baselines use elaborate algorithms with variable-size windows when matching image pairs, down to single pixels. They aggregate data from neighboring pixels at later processing stages, which allows them to handle occlusions and the perspective distortions that make paired images different. With a narrow baseline system, ranging objects at distances that are hundreds to thousands of times larger than the baseline, the difference in perspective distortion between the images is almost always very small. And as the only way to get subpixel resolution requires matching many pixels at once anyway, using fixed-size image tiles instead of individual pixels does not reduce the flexibility of the algorithm much.

    Processing fixed-size image tiles promises a significant advantage: hardware-accelerated pixel-level tile processing combined with higher-level software that operates on per-tile data rather than per-pixel data. Tile processing can be implemented within the FPGA-friendly stream processing paradigm, leaving decision making to the software. Matching image tiles may be implemented using methods similar to those used for image and especially video compression, where motion vector estimation is similar to the calculation of disparity between stereo images, so similar algorithms may be used, such as phase-only correlation (PoC).
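    As a rough illustration of what tile matching by phase correlation looks like (shown here with a plain FFT on a pair of same-size tiles; the actual implementation described below uses the CLT and lives in the tile processor):

    import numpy as np

    def phase_correlate(tile_a, tile_b, eps=1e-9):
        """Return the 2-d phase-only correlation surface of two tiles.

        The location of the peak gives the integer shift between the tiles;
        interpolating around the peak gives the subpixel disparity.
        """
        A = np.fft.fft2(tile_a)
        B = np.fft.fft2(tile_b)
        cross = A * np.conj(B)
        cross /= np.abs(cross) + eps      # keep only the phase information
        corr = np.real(np.fft.ifft2(cross))
        return np.fft.fftshift(corr)      # put zero shift at the center

    # peak = np.unravel_index(np.argmax(corr), corr.shape) locates the best match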

    Two dimensional array vs. binocular and inline camera rigs

    Usually stereo cameras or fixed baseline multi-view stereo are binocular systems, with just two sensors. Less common systems have more than two lenses positioned along the same line. Such configurations improve the useful camera range (ability to measure near and far objects) and reduce ambiguity when dealing with periodic object structures. Even less common are the rigs where the individual cameras form a 2d structure.

    In this project we used a camera with 4 sensors located in the corners of a square, so they are not co-linear. Correlation-based matching of the images depends on the detailed texture in the matched areas of the images – perfectly uniform objects produce no data for depth estimation. Additionally, some common types of image details may be unsuitable for certain orientations of the camera baselines. A vertical concrete pole can easily be correlated by two horizontally positioned cameras, but if the baseline is turned vertical, the same binocular camera rig would fail to produce a disparity value. The same is true when trying to capture horizontal features with a horizontal binocular system – such predominantly horizontal features are common when viewing nearly flat horizontal surfaces at high angles of incidence (almost parallel to the view direction).

    With four cameras we process four image pairs – 2 horizontal (top and bottom) and 2 vertical (right and left) – and depending on the application requirements for a particular image region it is possible to combine the correlation results of all 4 pairs, or just the horizontal and vertical ones separately. When all 4 baselines have equal length it is easier to combine image data before calculating the precise location of the correlation maximums: 2 pairs can be combined directly, and the 2 others after rotating the tiles by 90 degrees (swapping X and Y directions, transposing the tiles' 2d arrays), as sketched below.
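    One way to picture that combination step (a sketch of my reading of the text above, not Elphel's actual code): the correlation surfaces of the two horizontal pairs add directly, while those of the two vertical pairs are transposed first so that their disparity axis lines up with the horizontal one.

    import numpy as np

    def combine_pairs(corr_top, corr_bottom, corr_left, corr_right):
        """Combine the per-pair correlation surfaces of one tile.

        Horizontal pairs (top, bottom) are summed as-is; vertical pairs
        (left, right) are transposed so all four peaks fall on the same axis.
        """
        return (corr_top + corr_bottom + corr_left.T + corr_right.T) / 4.0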

    Image rectification and resampling

    Many implementations of multi-view stereo processing start with image rectification, which involves correcting for perspective and lens distortions and projecting the individual images onto a common plane. Such projection simplifies matching image tiles by correlation, but as it involves resampling of the images, it either reduces resolution or requires upsampling, increasing the required memory size and processing complexity.

    This implementation does not require full de-warping of the images and the related resampling with fractional pixel shifts. Instead we split the geometric distortion of each lens into two parts:

    • common (average) distortion of all four lenses approximated by analytical radial distortion model, and
    • small residual deviation of each lens image transformation from the common distortion model

    The common radial distortion parameters are used to calculate the matching tile location in each image, and while the integer-rounded pixel shifts of the tile centers are used directly when selecting input pixel windows, the fractional pixel remainders are preserved and combined with the other image shifts in the FPGA tile processor. Matching of the images is performed in this common distorted space; the tile grid is also mapped to this presentation, not to the fully rectified rectilinear image.

    Small individual lens deviations from the common distortion model are smooth 2-d functions over the 2-d image plane; they are interpolated from the calibration data stored on a lower resolution grid.

    We use low distortion, sorted lenses with matching focal lengths to make sure that the scale mismatch between the image tiles stays below the target subpixel interval (0.1 pix) over the tile size. The low distortion requirement extends the distance range towards near objects, because with higher disparity values the matching tiles in the different images land in differently distorted areas. Focal length matching allows the use of the modulated complex lapped transform (CLT), which, similar to the discrete Fourier transform (DFT), is invariant to shifts but not to scaling (log-polar coordinates are not applicable here, as such a transformation would break shift invariance).

    Enhancing images by correcting optical aberrations with space-variant deconvolution

    Matching of images acquired with almost identical lenses is rather insensitive to the lens aberrations that degrade image quality (mostly by reducing sharpness), especially in the peripheral image areas. Aberration correction is still needed to get sharp textures in the resulting 3d models over the full field of view; the resolution of modern sensors is usually better than what the lenses can provide. Correction can be implemented with space-variant deconvolution (different kernels for different areas of the image); we routinely use it for post-processing of Eyesis4π images. The DCT-based implementation is described in an earlier blog post.

    The space-variant deconvolution kernels can absorb (i.e. be combined, during calibration processing, with) the individual lens deviations from the common distortion model described above. Aberration correction and image rectification to the common image space can thus be performed simultaneously, using the same processing resources.
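    A minimal per-tile sketch of that idea (frequency-domain pointwise multiplication with a precomputed correction kernel; this uses a plain FFT instead of the DCT/CLT machinery of the real implementation, and kernel selection per image region is assumed to happen elsewhere):

    import numpy as np

    def deconvolve_tile(tile, inv_kernel_f):
        """Apply a precomputed inverse (aberration-correction) kernel to one tile.

        inv_kernel_f -- frequency-domain correction kernel for this tile's area
                        of the image, derived from the calibration data.
        """
        return np.real(np.fft.ifft2(np.fft.fft2(tile) * inv_kernel_f))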

    Two dimensional vs. single dimensional matching along the epipolar lines

    The common approach for matching image pairs is to replace the two-dimensional correlation with a one-dimensional task by correlating pixels along the epipolar lines, which are just horizontal lines for horizontally built binocular systems with parallel optical axes. Aggregation of the correlation maxima locations between neighboring parallel lines of pixels is performed in the image pixel domain after each line is processed separately.

    For tile-based processing it is beneficial to perform a full 2-d correlation: the phase correlation is performed in the frequency domain, and after the pointwise multiplication during aberration correction the image tiles are already available in the 2d frequency domain. Two-dimensional correlation implies aggregation of data from multiple scan lines, so it can tolerate (and be used to correct) small lens misalignments, and with appropriate filtering it can be used to detect (and match) linear features.

    Implementation

    Prototype camera

    The experimental camera looks similar to Elphel's regular H-camera – we just incorporated the different sensor front ends (3d CAD model) that are used in Eyesis4π and added adjustment screws to align the optical axes of the lenses (heading and tilt) and the orientations of the image sensors (roll). The sensors are 5 Mpix 1/2″ format On Semiconductor MT9P006, the lenses – Evetar N125B04530W.

    We selected lenses with focal lengths matched to within 1%, and calibrated the camera using our standard camera rotation machine and the target pattern. As we do not yet have production adjustment equipment and software, the adjustment took several iterations: calibrating the camera and measuring the extrinsic parameters of each sensor front end, then rotating each of the adjustment screws according to spreadsheet-calculated values, and then re-running the whole calibration process. Finally, the calibration results – radial distortion parameters, SFE extrinsic parameters, vignetting and deconvolution kernels – were converted to a form suitable for run-time application (currently, during post-processing of the captured images).

    Figure 2. Camera block diagram

    This prototype still uses 3d-printed parts, and such mounts proved to be not stable enough, so we had to add field calibration and write code for bundle adjustment of the individual imager orientations from the 2-d correlation data for each of the 4 individual pairs.

    Camera performance depends on the actual mechanical stability; software compensation can only partially mitigate this misalignment problem, and the precision of the distance measurements was reduced when the cameras went off by more than 20 pixels after being carried in a backpack. Nevertheless, scene reconstruction remained possible.

    Software

    Multi-view stereo rigs are capable of capturing dynamic scenes, so our goal is to make a real-time system with most of the heavy-weight processing done in the FPGA.

    One of the major challenges here is how to combine the parallel and stream processing capabilities of the FPGA with the flexibility of the software needed to implement advanced 3d reconstruction algorithms. Our approach is to use an FPGA-based tile processor to perform uniform operations on lists of “tiles” – fixed square overlapping windows in the images. The FPGA processes tile data at the pixel level, while the software operates on whole tiles.

    Figure 2 shows the overall block diagram of the camera, Figure 3 illustrates details of the tile processor.

    Figure 3. FPGA tile processor

    The initial implementation does not contain actual FPGA processing; so far we have only tested some of the core functions in the FPGA – the two-dimensional 8×8 DCT-IV needed for both the 16×16 CLT and ICLT. The current code consists of two separate parts: one part (the tile processor) simulates what will be moved to the FPGA and handles image tiles at the pixel level, while the other part is what will remain software – it operates on the tile level and does not deal with individual pixels. The two parts interact through shared system memory; the tile processor has exclusive access to the dedicated image buffer and calibration data.

    Each tile is a 16×16 pixel square with 8 pixel overlap. The software prepares a tile list (see the sketch of a tile descriptor after this list) that includes:

    • the tile center X,Y (for the virtual “center” image),
    • the center disparity, so that each of the 4 image tiles will be shifted accordingly, and
    • the code of the operation(s) to be performed on that tile.
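
    A tile task could be represented roughly as follows (an illustrative sketch only; the field names and operation codes are hypothetical, not the actual data layout of the Java code or the planned FPGA interface):

        from dataclasses import dataclass

        # Illustrative operation flags (hypothetical names and values)
        OP_CORRELATE = 0x01   # compute phase correlation / disparity for the tile
        OP_TEXTURE   = 0x02   # generate the combined texture tile

        @dataclass
        class TileTask:
            tile_x: int        # tile center X in the virtual "center" image, pixels
            tile_y: int        # tile center Y in the virtual "center" image, pixels
            disparity: float   # center disparity; each of the 4 image tiles is shifted accordingly
            ops: int           # bitmask of operations to perform on this tile

        # Example: a tiny tile list prepared by the software for one pass
        tasks = [
            TileTask(tile_x=128, tile_y=64, disparity=4.5, ops=OP_CORRELATE | OP_TEXTURE),
            TileTask(tile_x=136, tile_y=64, disparity=4.5, ops=OP_CORRELATE),
        ]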

    Figure 4. Correlation processor

    The tile processor performs all or some of the following operations, depending on the tile operation codes (a simplified sketch of the correlation step follows this list):

    • Reads the tile tasks from the shared system memory.
    • Calculates locations and loads image and calibration data from the external image buffer memory (using on-chip memory to cache data, as the overlapping nature of the tiles makes each pixel participate, on average, in 4 neighboring tiles).
    • Converts tiles to frequency domain using CLT based on 2d DCT-IV and DST-IV.
    • Performs aberration correction in the frequency domain by pointwise multiplication by the calibration kernels.
    • Calculates correlation-related data (Figure 4) for the tile pairs – resulting in tile disparity and disparity confidence values for all pairs combined, and/or more specific correlation types – by pointwise multiplication, inverse CLT to the pixel domain, filtering, and local maximum extraction by quadratic interpolation or windowed center-of-mass calculation.
    • Calculates the combined texture for the tile (Figure 5), using an alpha channel to mask out pixels that do not match – this is how single-pixel lateral resolution is effectively restored after aggregating individual pixels into tiles. Textures can be combined using only the programmed shifts according to the specified disparity, or using an additional shift calculated in the correlation module.
    • Calculates other integral values for the tiles (Figure 5), such as per-channel number of mismatched pixels – such data can be used for quick second-level (using tiles instead of pixels) correlation runs to determine which 3d volumes potentially have objects and so need regular (pixel-level) matching.
    • Finally, the tile processor saves the results – correlation values and/or the texture tile – to the shared system memory, so the software can access this data.
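
    To illustrate just the correlation step, here is a minimal sketch that uses an ordinary FFT-based phase correlation as a stand-in for the CLT-based processing described above (tile data is synthetic, and edge handling of the peak search is omitted):

        import numpy as np

        def phase_correlate(tile_a, tile_b, eps=1e-9):
            """Phase correlation; the peak of the returned surface gives the shift
            of tile_b relative to tile_a (i.e. the disparity between the two tiles)."""
            fa = np.fft.fft2(tile_a)
            fb = np.fft.fft2(tile_b)
            cross = fb * np.conj(fa)
            cross /= np.abs(cross) + eps             # keep the phase only
            return np.fft.fftshift(np.real(np.fft.ifft2(cross)))

        def subpixel_peak(corr):
            """Estimate the peak location with sub-pixel precision using a windowed
            center of mass around the integer maximum (peak assumed not on the border)."""
            iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
            win = corr[iy - 1:iy + 2, ix - 1:ix + 2].clip(min=0)
            ys, xs = np.mgrid[-1:2, -1:2]
            cy = iy + (win * ys).sum() / win.sum()
            cx = ix + (win * xs).sum() / win.sum()
            h, w = corr.shape
            return cx - w // 2, cy - h // 2          # shift relative to the tile center

        # Hypothetical 16x16 tiles: tile_b is tile_a circularly shifted by 2 pixels in x
        rng = np.random.default_rng(0)
        tile_a = rng.standard_normal((16, 16))
        tile_b = np.roll(tile_a, 2, axis=1)
        print(subpixel_peak(phase_correlate(tile_a, tile_b)))   # approximately (2.0, 0.0)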

    Figure 5. Texture processor

    A single tile processor operation deals with the scene objects that would be projected to this tile’s 16×16 pixel square on the sensor of a virtual camera located in the center between the four actual physical cameras. A single pass over the tile data is limited not just laterally but also in depth, because for the tiles to correlate they have to have significant overlap. A 50% overlap corresponds to a correlation offset range of ±8 pixels; better correlation contrast needs 75% overlap, or ±4 pixels. The tile processor “probes” not all the voxels that project to the same 16×16 window of the virtual image, but only those that belong to a certain distance range – the distances that correspond to disparities within ±4 pixels of the value provided for the tile.

    That means that a single processing pass over a tile captures data in a disparity space volume, or macro-voxel, 8 pixels wide by 8 pixels high by 8 pixels deep (considering the central part of the overlapping volumes), and capturing the whole scene may require multiple passes over the same tile with different disparities. There are ways to avoid a full-range disparity sweep (in 8 pixel increments) for all tiles – following surfaces and detecting occlusions and discontinuities, or second-level correlation of tiles instead of individual pixels.
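
    For example, a full disparity sweep for one tile could simply enumerate the macro-voxel centers in 8 pixel steps, each pass covering roughly ±4 pixels around its center disparity (a sketch with an assumed maximum disparity):

        # Enumerate disparity sweep passes for one tile (the maximum disparity is an assumption)
        max_disparity = 32.0   # pixels, assumed upper bound for this scene
        step = 8.0             # macro-voxel depth in the disparity direction

        passes = [i * step for i in range(int(max_disparity / step) + 1)]   # 0, 8, 16, 24, 32
        for center in passes:
            low, high = center - 4.0, center + 4.0
            print(f"pass at disparity {center:4.1f} px covers {low:5.1f} .. {high:5.1f} px")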

    Another reason for multi-pass processing of the same tile is to refine the disparity measured by correlation. When dealing with subpixel coordinates of the correlation maximums – located either by quadratic approximation or by some form of center-of-mass evaluation – the calculated values may be biased, and disparity histograms reveal modulation with the pixel period. A second “refine” pass, where individual tiles are shifted by the disparity measured in the previous pass, reduces the residual offset of the correlation maximum to a fraction of a pixel and mitigates this type of bias. Tile shift here means a combination of an integer pixel shift of the source images and a fractional (in the ±0.5 pixel range) shift that is performed in the frequency domain by multiplication by a cosine/sine phase rotator.
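
    The fractional shift can be illustrated with an ordinary DFT in place of the CLT (the underlying shift theorem is the same idea): multiplying the tile's spectrum by a complex phase rotator shifts the pixel-domain tile by a fraction of a pixel without any resampling. A minimal sketch:

        import numpy as np

        def fractional_shift(tile, dx, dy):
            """Shift a square tile by (dx, dy) pixels (|dx|, |dy| <= 0.5 in the refine
            pass) by multiplying its spectrum with a phase rotator."""
            n = tile.shape[0]
            f = np.fft.fftfreq(n)                       # spatial frequencies, cycles per pixel
            fx, fy = np.meshgrid(f, f)                  # fx varies along axis 1, fy along axis 0
            rot = np.exp(-2j * np.pi * (fx * dx + fy * dy))
            return np.real(np.fft.ifft2(np.fft.fft2(tile) * rot))

        # Quick check with a hypothetical tile: shifting by +0.25 px and then -0.25 px
        # restores the original data
        rng = np.random.default_rng(1)
        tile = rng.standard_normal((16, 16))
        back = fractional_shift(fractional_shift(tile, 0.25, 0.0), -0.25, 0.0)
        print(np.allclose(tile, back))                  # True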

    The total processing time and/or required FPGA resources depend linearly on the number of required tile processor operations, and the software may use several methods to reduce this number. In addition to the two approaches mentioned above (following surfaces and second-level correlation), it may be possible to reduce the field of view to a smaller area of interest, or to predict the current frame's scene from previous frames (as in 2d video compression) – the tile processor paradigm preserves the flexibility of the various algorithms that may be used in the 3d scene reconstruction software stack.

    Scene viewer

    The viewer for the reconstructed scenes is here: https://community.elphel.com/3d+map (viewer source code).

    Figure 6. 3d+map index page

    The index page shows a map (you may select from several providers) with markers for the locations of the captured scenes. On the left there is a vertical ribbon of thumbnails – you may scroll it with the mouse wheel or by dragging.

    Thumbnails are shown only for the markers that fit on screen, so zooming in on the map may reduce the number of visible thumbnails. When you select a thumbnail, the corresponding marker opens on the map, and one or several scenes are shown – one line per scene (identified by a Unix timestamp with fractional seconds) captured at the same location.

    The scene that matches the selected thumbnail is highlighted (the 4th line in Figure 6). Some scenes have different versions of the reconstruction from the same source images – they are listed in the same line (like the first line in Figure 6). The links lead to the viewers of the selected scene/version.

    Figure 7. Selection of the map / satellite imagery provider

    We do not have ground truth models for the captured scenes built with active scanners. Instead, as the most interesting case is ranging of distant objects (hundreds of meters), it is possible to use publicly available satellite imagery and match it to the captured models. We had an ideal view from the Elphel office window – each crack in the pavement was visible in the satellite images, so we could match them with the 3d model of the scene. Unfortunately they ruined it recently by replacing the asphalt :-).

    The scene viewer combines an x3dom representation of the 3d scene with a re-sizable overlapping map view. You may switch the map imagery provider by clicking on the map icon as shown in Figure 7.

    The scene and map views are synchronized with each other, and there are several ways of navigating in either the 3d or the map area:

    • drag the 3d view to rotate virtual camera without moving;
    • move cross-hair icon in the map view to rotate camera around vertical axis;
    • toggle button and adjust camera view elevation;
    • use scroll wheel over the 3d area to change camera zoom (field of view is indicated on the map);
    • drag with middle button pressed in the 3d view to move camera perpendicular to the view direction;
    • drag the camera icon (green circle) on the map to move the camera horizontally;
    • toggle button and move the camera vertically;
    • press a hotkey t over the 3d area to reset to the initial view: set azimuth and elevation same as captured;
    • press a hotkey r over the 3d area to set view azimuth as captured, elevation equal to zero (horizontal view).

    Figure 8. 3D model to map comparison

    Comparison of the 3d scene model and the map uses ball markers. By default these markers are one meter in diameter; the size can be changed on the settings page.

    Moving the pointer over the 3d area with the Ctrl key pressed causes the ball to follow the cursor at the distance where the view line intersects the nearest detected surface in the scene. It simultaneously moves the corresponding marker over the map view and indicates the measured distance.

    Ctrl-click places the ball marker on the 3d scene and on the map. It is then possible to drag the marker over the map and read the ground truth distance. Dragging the marker over the 3d scene updates its location on the map, but not the other way around; in edit mode the mismatch data is used to adjust the captured scene location and orientation.

    The program settings used during reconstruction limit the scene far distance to z = 1000 meters; all more distant objects are considered to be located at infinity. X3d allows the use of images at infinity via the backdrop element, but it is not flexible enough and is not supported by some other programs. In most models we place the infinity textures on a large billboard at z = 10,000 meters, and that is where the ball marker will appear if placed on the sky or other far objects.

    Figure 9. Settings and link to four images

    The settings page shown in Figure 9 has a link to the four-image viewer (Figure 10). These four images correspond to the captured views and are almost the “raw images” used for scene reconstruction. They were subject to optical aberration correction and are partially rectified – they are rendered as if they were captured by the same camera, one that has only strictly polynomial radial distortion.

    Such images are not actually used in the reconstruction process; they are rendered only for debug and demonstration purposes. The equivalent data exists in the tile processor only in frequency domain form as an intermediate result, and was subject to strictly linear processing (to avoid possible unintended biases), so the images show some residual, locally checkerboard pattern caused by the Bayer mosaic filter (discussed in an earlier blog). Textures generated from the combination of all four images have a significantly lower contrast of this pattern. It is possible to add some non-linear filtering at the very last stage of texture generation.

    Each scene model has a download link for an archive that contains the model itself as an *.x3d file and as Wavefront *.obj and *.mtl files, as well as the corresponding RGBA textures as PNG images. Initially I missed the fact that the x3d and obj formats imply opposite surface normal directions for the same triangular faces, so almost half of the Wavefront files still have incorrect (opposite direction) surface normals.
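
    The usual fix for an opposite normal direction is to reverse the vertex order of each face when writing the Wavefront file; a minimal sketch of that idea (not the actual exporter code):

        def flip_obj_faces(lines):
            """Reverse the vertex order of every face ('f ...') line of a Wavefront
            OBJ file, which flips the direction of the implied surface normals."""
            out = []
            for line in lines:
                if line.startswith("f "):
                    verts = line.split()[1:]
                    out.append("f " + " ".join(reversed(verts)))
                else:
                    out.append(line.rstrip("\n"))
            return out

        # Example: a single triangle with texture coordinate indices
        print(flip_obj_faces(["f 1/1 2/2 3/3"]))   # -> ['f 3/3 2/2 1/1']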

    Results

    Our initial plan was to test the algorithms for the tile processor before implementing them in the FPGA. The tile processor provides data for the disparity space image (DSI) – the confidence value of having a certain disparity at a specified 2d position in the image – and it also generates texture tiles.

    When the tile processor code was written and tested, we still needed some software to visualize the results. The DSI itself seemed promising (much better coverage than what I had in earlier experiments with binocular images), but when I tried to convert the textured tiles into a viewable x3d model directly, it was a big disappointment. The result did not look like a 3d scene – there were many narrow triangles that made sense only when viewed almost exactly from the camera's actual location; a small lateral viewpoint movement and the image was falling apart into something unrecognizable.

    Figure 10. Four channel images (click for actual viewer with zoom/pan capability)

    I was not able to find ready-to-use code, and the plan to write a quick demo for the tile processor and the generated DSI seemed less and less realistic. Eventually it took at least three times longer to get somewhat usable output than to develop the DCT-based tile processor code itself.

    The current software is still incomplete and lacks many needed features (it does not even cut off the background, so wires over the sky steal a lot of the surrounding space), and it runs slowly (several minutes per scene), but it does provide a starting point to evaluate the performance of the long range 4-camera multi-view stereo system. Much of the intended functionality does not work without more parameter tuning, but we decided to postpone improvements to the next stage (when we have cameras that are more mechanically stable) and instead try to capture more, very different scenes, process them in batch mode (keeping the same parameter values for all new scenes) and see what the output will be.

    As soon as the program was able to produce a somewhat coherent 3d model from the very first image set captured through the Elphel office window, Oleg Dzhimiev started developing the web application that allows matching the models with the map data. After adding more image sets I noticed that the camera calibration did not hold. Each individual sub-camera performed nicely (they use a thermally compensated mechanical design), but their extrinsic parameters did change, and we had to add code for field calibration that uses the images themselves. The best accuracy in disparity measurement over the field of view still requires the camera poses to match the ones used at full calibration, so later scenes with more developed misalignment (>20 pixels) are less precise than the earlier ones (captured in Salt Lake City).

    We do not have an established method to measure ranging precision for different distances to objects – the disparity values are calculated together with their confidence, and in lower confidence areas the accuracy is lower, including places where no ranging is possible at all due to the complete absence of visible detail in the images. Instead it is possible to compare distances in the various scene models to those on the map and see where such a camera is useful. With 0.1 pixel disparity resolution and a 150 mm baseline we should be able to measure 300 m distances with 10% accuracy, and for many captured scene objects the result is already not much worse. We have now placed orders to machine the new camera parts needed to build a more mechanically stable rig. In parallel with upgrading the hardware, we’ll start migrating the tile processor code from Java to Verilog.
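
    As a back-of-the-envelope check of that estimate (my own numbers: I assume the 2.2 µm pixel pitch of the MT9P006 and a nominal 4.5 mm lens focal length, neither of which is stated in this post), the disparity in pixels is d = f·B/(p·Z), and for small errors the relative range error is approximately δd/d:

        # Back-of-the-envelope check of the 300 m / 10% claim (focal length and pixel
        # pitch are assumptions, not taken from the article)
        f = 4.5e-3          # lens focal length, m (assumed)
        p = 2.2e-6          # pixel pitch, m (assumed for the MT9P006)
        B = 0.150           # baseline, m (from the article)
        Z = 300.0           # distance, m
        d_err = 0.1         # disparity resolution, pixels (from the article)

        d = f * B / (p * Z)             # disparity in pixels at 300 m
        rel_err = d_err / d             # relative range error for small errors
        print(f"disparity at {Z:.0f} m: {d:.2f} px, range error: {100 * rel_err:.0f}%")
        # -> roughly 1 px of disparity at 300 m, so 0.1 px resolution gives ~10% range error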

    And what’s next? Elphel's goal is to provide our users with high performance, hackable products and the freedom to modify them in ways and for purposes we could not imagine ourselves. But it is fun to fantasize about at least some possible applications:

    • Obviously, self-driving cars – an increased number of cameras arranged in a 2d pattern (square) results in significantly more robust matching even with low-contrast textures. It does not depend on sequential scanning and provides simultaneous data over a wide field of view. The calculated confidence of the distance measurements tells when alternative (active) ranging methods are needed – that would help to avoid the infamous accident where a self-driving car went under a truck.
    • Visual odometry for the drones would also benefit from the higher robustness of image matching.
    • Rovers on Mars or other planets using low-power passive (visual based) scene reconstruction.
    • Maybe self-flying passenger multicopters in heavy 3d traffic? Sure, they will all be equipped with some transponders, but what about aerial roadkill? Like the flock of geese that forced a water landing.
    • High speed boating or sailing over uneven seas with active hydrofoils that can look ahead and adjust to the future waves.
    • Landing on the asteroids for physical (not just Bitcoin) mining? With 150 mm baseline such camera can comfortably operate within several hundred meters from the object, with 1.5 m that will scale to kilometers.
    • Cinematography: post-production depth of field control that would easily beat even the widest format optics, HDR with a pair of 4-sensor cameras, some new VFX?
    • Multi-spectral imaging, where more spatially separated cameras with different bandpass filters can be combined into the same texture in the 3d scene.
    • Capturing underwater scenes and measuring how far the sea creatures are above the bottom.

    by Andrey Filippov at September 21, 2017 05:40 AM

    September 12, 2017

    Open Hardware Repository

    White Rabbit - 12-09-2017: PTP Trackhound smells the White Rabbit

    The software PTP Track Hound, which can capture and analyze PTP network traffic, now understands White Rabbit TLVs. So Track Hound can now sniff the tracks that the White Rabbit leaves behind.
    Track Hound is made freely available by Meinberg. One may want to know that the source code is not available under an open licence.

    by Erik van der Bij (Erik.van.der.Bij@cern.ch) at September 12, 2017 07:35 AM

    September 06, 2017

    Free Electrons

    Free Electrons at the Embedded Linux Conference Europe

    The next Embedded Linux Conference Europe will take place on October 23-25 in Prague, Czech Republic.

    Embedded Linux Conference Europe 2017

    As usual, a significant part of the Free Electrons engineering team will participate in the conference and give talks on various topics:

    In addition to the main ELCE conference, Thomas Petazzoni will participate in the Buildroot Developers Days, a 2-day hackathon organized on the Saturday and Sunday prior to ELCE, and in the Device Tree workshop organized on Thursday afternoon.

    Once again, we’re really happy to participate in this conference, and we are looking forward to meeting again with a large number of Linux kernel and embedded Linux developers!

    by Thomas Petazzoni at September 06, 2017 11:56 AM

    September 05, 2017

    Free Electrons

    Linux 4.13 released, Free Electrons contributions

    Linux 4.13 was released last Sunday by Linus Torvalds, and the major new features of this release were described in detail by LWN in a set of articles: part 1 and part 2.

    This release gathers 13006 non-merge commits, 239 of which were made by Free Electrons engineers. According to the LWN article on 4.13 statistics, this makes Free Electrons the 13th contributing company by number of commits, and the 10th by lines changed.

    The most important contributions from Free Electrons for this release have been:

    • In the RTC subsystem
      • Alexandre Belloni introduced a new method for registering RTC devices, with one step for the allocation and one step for the registration itself, which makes it possible to solve race conditions in a number of drivers.
      • Alexandre Belloni added support for exposing the non-volatile memory found in some RTC devices through the Linux kernel nvmem framework, making them usable from userspace. A few drivers were changed to use this new mechanism.
    • In the MTD/NAND subsystem
      • Boris Brezillon did a large number of fixes and minor improvements in the NAND subsystem, both in the core and in a few drivers.
      • Thomas Petazzoni contributed support for on-die ECC, specifically with Micron NANDs. This makes it possible to use the ECC calculation capabilities of the NAND chip itself, as opposed to using software ECC (calculated by the CPU) or ECC done by the NAND controller.
      • Thomas Petazzoni contributed a few improvements to the FSMC NAND driver, used on ST Spear platforms. The main improvement is support for the ->setup_data_interface() callback, which makes it possible to configure optimal timings in the NAND controller.
    • Support for Allwinner ARM platforms
      • Alexandre Belloni improved the sun4i PWM driver to use the so-called atomic API and support hardware read out.
      • Antoine Ténart improved the sun4i-ss cryptographic engine driver to support the Allwinner A13 processor, in addition to the already supported A10.
      • Maxime Ripard contributed HDMI support for the Allwinner A10 processor (in the DRM subsystem) and a number of related changes to the Allwinner clock support.
      • Quentin Schulz improved the support for battery charging through the AXP20x PMIC, used on Allwinner platforms.
    • Support for Atmel ARM platforms
      • Alexandre Belloni added suspend/resume support for the Atmel SAMA5D2 clock driver. This is part of a larger effort to implement the backup mode for the SAMA5D2 processor.
      • Alexandre Belloni added suspend/resume support in the tcb_clksrc driver, used for the clocksource and clockevents on Atmel SAMA5D2.
      • Alexandre Belloni cleaned up a number of drivers, removing support for non-DT probing, which is possible now that the AVR32 architecture has been dropped. Indeed, the AVR32 processors used to share the same drivers as the Atmel ARM processors.
      • Alexandre Belloni added the core support for the backup mode on Atmel SAMA5D2, a suspend/resume state with significant power savings.
      • Boris Brezillon switched Atmel platforms to use the new binding for the EBI and NAND controllers.
      • Boris Brezillon added support for timing configuration in the Atmel NAND driver.
      • Quentin Schulz added suspend/resume support to the Bosch m_can driver, used on Atmel platforms.
    • Support for Marvell ARM platforms
      • Antoine Ténart contributed a completely new driver (3200+ lines of code) for the Inside Secure EIP197 cryptographic engine, used in the Marvell Armada 7K and 8K processors. He also subsequently contributed a number of fixes and improvements for this driver.
      • Antoine Ténart improved the existing mvmdio driver, used to communicate with Ethernet PHYs over MDIO on Marvell platforms to support the XSMI variant found on Marvell Armada 7K/8K, used to communicate with 10G capable PHYs.
      • Antoine Ténart contributed minimal support for 10G Ethernet in the mvpp2 driver, used on Marvell Armada 7K/8K. For now, the driver still relies on low-level initialization done by the bootloader, but additional changes in 4.14 and 4.15 will remove this limitation.
      • Grégory Clement added a new pinctrl driver to configure the pin-muxing on the Marvell Armada 37xx processors.
      • Grégory Clement did a large number of changes to the clock drivers used on the Marvell Armada 7K/8K processors to prepare the addition of pinctrl support.
      • Grégory Clement added support for Marvell Armada 7K/8K to the existing mvebu-gpio driver.
      • Thomas Petazzoni added support for the ICU, a specialized interrupt controller used on the Marvell Armada 7K/8K, for all devices located in the CP110 part of the processor.
      • Thomas Petazzoni removed a work-around to properly resume per-CPU interrupts on the older Marvell Armada 370/XP platforms.
    • Support for RaspberryPi platforms
      • Boris Brezillon added runtime PM support to the HDMI encoder driver used on RaspberryPi platforms, and contributed a few other fixes to the VC4 DRM driver.

    It is worth mentioning that Miquèl Raynal, recently hired by Free Electrons, had his first kernel patch merged: nand: fix wrong default oob layout for small pages using soft ecc.

    Free Electrons engineers are not only contributors, but also maintainers of various subsystems in the Linux kernel, which means they are involved in the process of reviewing, discussing and merging patches contributed to those subsystems:

    • Maxime Ripard, as the Allwinner platform co-maintainer, merged 113 patches from other contributors
    • Boris Brezillon, as the MTD/NAND maintainer, merged 62 patches from other contributors
    • Alexandre Belloni, as the RTC maintainer and Atmel platform co-maintainer, merged 57 patches from other contributors
    • Grégory Clement, as the Marvell EBU co-maintainer, merged 47 patches from other contributors

    Here is the commit by commit detail of our contributions to 4.13:

    by Thomas Petazzoni at September 05, 2017 07:21 AM

    September 02, 2017

    Harald Welte

    Purism Librem 5 campaign

    There's a new project currently undergoing crowd funding that might be of interest to the former Openmoko community: The Purism Librem 5 campaign.

    Similar to Openmoko a decade ago, they are aiming to build a FOSS based smartphone built on GNU/Linux without any proprietary drivers/blobs on the application processor, from bootloader to userspace.

    Furthermore (just like Openmoko) the baseband processor is fully isolated, with no shared memory and with the Linux-running application processor being in full control.

    They go beyond what we wanted to do at Openmoko in offering hardware kill switches for camera/phone/baseband/bluetooth. During Openmoko days we assumed it is sufficient to simply control all those bits from the trusted Linux domain, but of course once that might be compromised, a physical kill switch provides a completely different level of security.

    I wish them all the best, and hope they can leave a better track record than Openmoko. Sure, we sold some thousands of phones, but the company quickly died, and the state of software was far from end-user-ready. I think the primary obstacles/complexities are verification of the hardware design as well as the software stack all the way up to the UI.

    The budget of ~ 1.5 million seems extremely tight from my point of view, but then I have no information about how much Puri.sm is able to invest from other sources outside of the campaign.

    If you're a FOSS developer with a strong interest in a Free/Open privacy-first smartphone, please note that they have several job openings, from Kernel Developer to OS Developer to UI Developer. I'd love to see some talents at work in that area.

    It's a bit of a pity that almost all of the actual technical details are unspecified at this point (except RAM/flash/main-cpu). No details on the cellular modem/chipset used, no details on the camera, nor on the bluetooth chipset, wifi chipset, etc. This might be an indication of the early stage of their planning. I would have expected that one would have ironed out those questions before looking for funding - but then, it's their campaign and they can run it as they see fit!

    I for my part have just put in a pledge for one phone. Let's see what will come of it. In case you feel motivated by this post to join in: Please keep in mind that any crowdfunding campaign bears significant financial risks. So please make sure you made up your mind and don't blame my blog post for luring you into spending money :)

    by Harald Welte at September 02, 2017 10:00 PM

    Uwe Hermann

    Website Reconstruction

    My website/blog/photoblog has been in a stale and broken state for quite a while now; I’ve finally found some time to fix everything up.

    I’m rebuilding everything from scratch (for various reasons) in Drupal 8, so for the time being various older pages and blog posts will not be available, but I’m continuously re-adding content until (more or less) everything is back up. Stay tuned!

    Comments will be disabled in the future, please contact me via email for feedback/comments on blog posts or the like, I’ll be updating the posts to reflect any feedback. Thanks!


    Comments or feedback? Please contact me via Mastodon: @uwehermann@fosstodon.org.

    September 02, 2017 12:36 PM

    September 01, 2017

    Harald Welte

    The sad state of voice support in cellular modems

    Cellular modems have existed for decades and come in many shapes and kinds. They contain the cellular baseband processor, RF frontend, protocol stack software and anything else required to communicate with a cellular network. Basically a phone without display or input.

    During the last decade or so, the vast majority of cellular modems come as LGA modules, i.e. a small PCB with all components on the top side (and a shielding can), which has contact pads on the bottom so you can solder it onto your mainboard. You can obtain them from vendors such as Sierra Wireless, u-blox, Quectel, ZTE, Huawei, Telit, Gemalto, and many others.

    In most cases, the vendors now also solder those modules to small adapter boards to offer the same product in mPCIe form-factor. Other modems are directly manufactured in mPCIe or NGFF aka m.2 form-factor.

    As long as those modems were still 2G / 2.5G / 2.75G, the main interconnection with the host (often some embedded system) was a serial UART. The audio input/output for voice calls was made available as analog signals, ready to connect a microphone and speaker, as that's what the cellular chipsets were designed for in smartphones. In the Openmoko phones we also interfaced the audio of the cellular modem in analog, for exactly that reason.

    From 3G onwards, the primary interface towards the host is now USB, with the modem running as a USB device. If your laptop contains a cellular modem, you will see it show up in the lsusb output.

    From that point onwards, it would have made a lot of sense to simply expose the audio via USB as well: offer a multi-function USB device that has both the virtual serial ports for AT commands and the network device for IP, and add a USB Audio device to it. It would simply show up as a "USB sound card" to the host, with all standard drivers working as expected. Sadly, nobody seems to have implemented this, at least not in a supported production version of their product.

    Instead, what some modem vendors have implemented as an ugly hack is the transport of 8kHz 16bit PCM samples over one of the UARTs. See for example the Quectel UC-20 or the Simcom SIM7100 which implement such a method.

    All the others largely ignore any access to the audio stream from software. One wonders why that is, since from a software and systems architecture perspective it would be super easy. Instead, what most vendors do is expose a digital PCM interface. This is suboptimal in many ways:

    • there is no mPCIe standard on which pins PCM should be exposed
    • no standard product (like laptop, router, ...) with mPCIe slot will have anything connected to those PCM pins

    Furthermore, each manufacturer / modem seems to support a different subset or dialect of the PCM interface (a sketch of such a configuration record follows this list) in terms of

    • voltage (almost all of them are 1.8V, while mPCIe signals normally are 3.3V logic level)
    • master/slave (almost all of them insist on being a clock master)
    • sample format (alaw/ulaw/linear)
    • clock/bit rate (mostly 2.048 MHz, but can be as low as 128kHz)
    • frame sync (mostly short frame sync that ends before the first bit of the sample)
    • endianness (mostly MSB first)
    • clock phase (mostly change signals at rising edge; sample at falling edge)
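
    For illustration, that parameter space can be captured in a small configuration record like the one below (a sketch with hypothetical field names; the example values describe one commonly encountered dialect – 2.048 MHz bit clock, short frame sync, 16-bit linear samples, MSB first):

        from dataclasses import dataclass

        @dataclass
        class PcmDialect:
            """One modem's PCM interface dialect (illustrative field names)."""
            io_voltage: float          # logic level, volts (usually 1.8; mPCIe signals are 3.3)
            clock_master: bool         # True if the modem insists on being the clock master
            sample_format: str         # "alaw", "ulaw" or "linear"
            bit_clock_hz: int          # 128_000 ... 2_048_000
            short_frame_sync: bool     # frame sync ends before the first bit of the sample
            msb_first: bool            # bit order of the serialized sample
            change_on_rising_edge: bool  # clock phase: signals change on the rising edge

        # Example: a commonly encountered dialect (values chosen for illustration)
        typical = PcmDialect(
            io_voltage=1.8,
            clock_master=True,
            sample_format="linear",
            bit_clock_hz=2_048_000,
            short_frame_sync=True,
            msb_first=True,
            change_on_rising_edge=True,
        )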

    It's a real nightmare, when it could be so simple. If they implemented USB-Audio, you could plug a cellular modem into any board with a mPCIe slot and it would simply work. As they don't, you need a specially designed mainboard that implements exactly the specific dialect/version of PCM of the given modem.

    By the way, the most "amazing" vendor seems to be u-blox. Their Modems support PCM audio, but only the solder-type version. They simply didn't route those signals to the mPCIe slot, making audio impossible to use when using a connectorized modem. How inconvenient.

    Summary

    If you want to access the audio signals of a cellular modem from software, then you either

    • have standard hardware and pick one very specific modem model, and hope it remains available for long enough for your application, or
    • build your own hardware implementing a PCM slave interface and then pick + choose your cellular modem

    On the Osmocom mpcie-breakout board and the sysmocom QMOD board we have exposed the PCM related pins on 2.54mm headers to allow some separate board to pick up that PCM and offer it to the host system. However, such a separate board hasn't been developed so far.

    by Harald Welte at September 01, 2017 10:00 PM

    First actual XMOS / XCORE project

    For many years I've been fascinated by the XMOS XCore architecture. It offers a surprisingly refreshing alternative to virtually any other classic microcontroller architecture out there. However, despite reading a lot about it years ago, being fascinated by it, and even giving a short informal presentation about it once, I've so far never used it. Too much "real" work imposes a high barrier to spending time learning about new architectures, languages, toolchains and the like.

    Introduction into XCore

    Rather than having lots of fixed-purpose built-in "hard core" peripherals for interfaces such as SPI, I2C, I2S, etc. the XCore controllers have a combination of

    • I/O ports for 1/4/8/16/32 bit wide signals, with SERDES, FIFO, hardware strobe generation, etc
    • Clock blocks for using/dividing internal or external clocks
    • hardware multi-threading that presents 8 logical threads on each core
    • xCONNECT links that can be used to connect multiple processors over 2 or 5 wires per direction
    • channels as a means of communication (similar to sockets) between threads, whether on the same xCORE or a remote core via xCONNECT
    • an extended C (xC) programming language to make use of parallelism, channels and the I/O ports

    In spirit, it is like a 21st century implementation of some of the concepts established first with Transputers.

    My main interest in xMOS has been the flexibility that you get in implementing not-so-standard electronics interfaces. For regular I2C, UART, SPI, etc. there is of course no such need. But every so often one encounters some interface that's very rarely found (like the output of an E1/T1 Line Interface Unit).

    Also, quite often I run into use cases where it's simply impossible to find a microcontroller with a sufficient number of the related peripherals built-in. Try finding a microcontroller with 8 UARTs, for example. Or one with four different PCM/I2S interfaces, which all can run in different clock domains.

    The existing options of solving such problems basically boil down to either implementing it in hard-wired logic (unrealistic, complex, expensive) or going to programmable logic with CPLD or FPGAs. While the latter is certainly also quite interesting, the learning curve is steep, the tools anything but easy to use and the synthesising time (and thus development cycles) long. Furthermore, your board design will be more complex as you have that FPGA/CPLD and a microcontroller, need to interface the two, etc (yes, in high-end use cases there's the Zynq, but I'm thinking of several orders of magnitude less complex designs).

    Of course one can also take a "pure software" approach and go for high-speed bit-banging. There are some ARM SoCs that can toggle their pins fast enough; people have reported rates like 14 MHz being possible on a Raspberry Pi. However, when running a general-purpose OS in parallel, this kind of speed is hard to sustain reliably over the long term, and the related software implementations are going to be anything but nice to write.

    So the XCore is looking like a nice alternative for a lot of those use cases. Where you want a microcontroller with more programmability in terms of its I/O capabilities, but not go as far as to go full-on with FPGA/CPLD development in Verilog or VHDL.

    My current use case

    My current use case is to implement a board that can accept four independent PCM inputs (all in slave mode, i.e. clock provided by external master) and present them via USB to a host PC. The final goal is to have a board that can be combined with the sysmoQMOD and which can interface the PCM audio of four cellular modems concurrently.

    While XMOS is quite strong in the Audio field and you can find existing examples and app notes for I2S and S/PDIF, I couldn't find any existing code for a PCM slave of the given requirements (short frame sync, 8kHz sample rate, 16bit samples, 2.048 MHz bit clock, MSB first).

    I wanted to get a feeling how well one can implement the related PCM slave. In order to test the slave, I decided to develop the matching PCM master and run the two against each other. Despite having never written any code for XMOS before, nor having used any of the toolchain, I was able to implement the PCM master and PCM slave within something like ~6 hours, including simulation and verification. Sure, one can certainly do that in much less time, but only once you're familiar with the tools, programming environment, language, etc. I think it's not bad.

    The biggest problem was that the clock phase for a clocked output port cannot be configured, i.e. the XCore insists on always clocking out a new bit at the falling edge, while my use case of course required the opposite: clocking out new signals at the rising edge. I had to use a second clock block to generate the inverted clock in order to achieve that goal.

    Beyond that 4xPCM use case, I also have other ideas like finally putting the osmo-e1-xcvr to use by combining it with an XMOS device to build a portable E1-to-USB adapter. I have no clue if and when I'll find time for that, but if somebody wants to join in: Let me know!

    The good parts

    Documentation excellent

    I found the various pieces of documentation extremely useful and very well written.

    Fast progress

    I was able to make fast progress in solving the first task using the XMOS / Xcore approach.

    Soft Cores developed in public, with commit log

    You can find plenty of soft cores that XMOS has been developing on github at https://github.com/xcore, including the full commit history.

    This type of development is a big improvement over what most vendors of smaller microcontrollers like Atmel are doing (infrequent tar-ball code-drops without commit history). And in the case of the classic uC vendors, we're talking about drivers only. In the XMOS case it's about the entire logic of the peripheral!

    You can for example see that for their I2C core, the very active commit history goes back to January 2011.

    xSIM simulation extremely helpful

    The xTIMEcomposer IDE (based on Eclipse) contains extensive tracing support and an extensible, near cycle accurate simulator (xSIM). I've implemented a PCM master and a PCM slave in xC and was able to simulate the program while looking at the waveforms of the logic signals between those two.

    The bad parts

    Unfortunately, my extremely enthusiastic reception of XMOS has suffered quite a bit over time. Let me explain why:

    Hard to get XCore chips

    While the product portfolio on the xMOS website looks extremely comprehensive, the vast majority of the parts are not available from stock at distributors. You won't even get samples, and lead times are 12 weeks (!). If you check at digikey, they have listed a total of 302 different XMOS controllers, but only 35 of them are in stock. USB capable are 15. With other distributors like Farnell it's even worse.

    I've seen this with other semiconductor vendors before, but never to such a large extent. Sure, some packages/configurations are not standard products, but having only 11% of the portfolio actually available is pretty bad.

    In such situations, where it's difficult to convince distributors to stock parts, it would be a good idea for XMOS to stock parts themselves and provide samples / low quantities directly. Not everyone is able to order large trays and/or able to wait 12 weeks, especially during the R&D phase of a board.

    Extremely limited number of single-bit ports

    In the smaller / lower pin-count parts, like the XU[F]-208 series in QFN/LQFP-64, the number of usable, exposed single-bit ports is ridiculously low. Out of the total of 33 I/O lines available, only 7 can be used as single-bit I/O ports. All other lines can only be used for 4-, 8-, or 16-bit ports. If you're dealing primarily with serial interfaces like I2C, SPI, I2S, UART/USART and the like, those parallel ports are of no use, and you have to go for a mechanically much larger part (like the XU[F]-216 in TQFP-128) in order to have a decent number of single-bit ports exposed. Those parts also come with twice the number of cores, memory, etc. – which you don't need for slow-speed serial interfaces...

    Change to a non-FOSS License

    XMOS deserved a lot of praise for releasing all their soft IP cores as Free / Open Source Software on github at https://github.com/xcore. The License has basically been a 3-clause BSD license. This was a good move, as it meant that anyone could create derivative versions, whether proprietary or FOSS, and there would be virtually no license incompatibilities with whatever code people wanted to write.

    However, to my very big disappointment, more recently XMOS seems to have changed their policy on this. New soft cores (released at https://github.com/xmos as opposed to the old https://github.com/xcore) are made available under a non-free license. This license is nothing like the BSD 3-clause license or any other Free Software or Open Source license. It restricts the license to use of the code together with an XMOS product, requires the user to contribute fixes back to XMOS and contains references to import and export control. This license is incompatible with probably any FOSS license in existence, making it impossible to write FOSS code on XMOS while using any of the new soft cores released by XMOS.

    But even beyond that license change, not even all code is provided in source code format anymore. The new USB library (lib_usb) is provided as binary-only library, for example.

    If you know anyone at XMOS management or XMOS legal with whom I could raise this topic of license change when transitioning from older sc_* software to later lib_* code, I would appreciate this a lot.

    Proprietary Compiler

    While a lot of the toolchain and IDE is based on open source (Eclipse, LLVM, ...), the actual xC compiler is proprietary.

    by Harald Welte at September 01, 2017 10:00 PM

    Open Hardware Repository

    MasterFIP - Order of 130 masterFIP v4

    After having validated the design (through 15 v3 boards) we are now ready to produce 130 v4 masterFIP boards!
    The plan is to install them in machines in LHC operation before the end of this year.

    by Evangelia Gousiou (Evangelia.Gousiou@cern.ch) at September 01, 2017 04:23 PM

    August 30, 2017

    Open Hardware Repository

    White Rabbit - 29-08-2017: Geodetic station connected with WR to UTC(MIKE)

    MIKES, the centre for metrology and accreditation of Finland, has connected the Metsähovi Geodetic Research Station to the official time of Finland, UTC(MIKE).

    Some quotes from the article Metsähovi connected to the official time of Finland:

    The time transfer to Metsähovi, Kirkkonummi, occurs from the UTC-laboratory at VTT MIKES Metrology in Otaniemi via optical fibre using the White Rabbit protocol. VTT MIKES Metrology has been an early adopter of the White Rabbit technology for time transfer across long distances. White Rabbit was developed at CERN, the European Organization for Nuclear Research.

    The measurements show, for example, how the travel time of light each way in a 50-kilometre fibre optic cable varies by approx. 7 nanoseconds within a 24-hour period as temperature changes affect the properties of the fibre optic cable, particularly its length.

    The uncertainty of time transfer is expected to be 100 ps or better. The precision of frequency transfer is currently approx. 15 digits.
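
    As a rough plausibility check of the quoted 7 ns figure (my own estimate, not from the article, assuming a typical thermal delay coefficient of roughly 40 ps per km per kelvin for standard single-mode fibre):

        # Rough plausibility check (the coefficient is an assumption, not from the article)
        length_km = 50.0
        coeff_ps_per_km_K = 40.0          # assumed thermal delay coefficient of SMF
        delay_variation_ns = 7.0          # figure quoted in the article

        temp_swing_K = delay_variation_ns * 1000 / (coeff_ps_per_km_K * length_km)
        print(f"~{temp_swing_K:.1f} K day/night temperature swing would explain ~7 ns")
        # -> about 3.5 K, a plausible daily variation for installed fibre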

    by Erik van der Bij (Erik.van.der.Bij@cern.ch) at August 30, 2017 09:41 AM

    August 29, 2017

    Free Electrons

    Free Electrons at the Linux Plumbers 2017 conference

    The Linux Plumbers conference has established itself as a major conference in the Linux ecosystem, discussing numerous aspects of the low-level layers of the Linux software stack. Linux Plumbers is organized around a number of micro-conferences, plus a number of more regular talks.

    Linux Plumbers 2017

    Free Electrons already participated to several previous editions of Linux Plumbers, and will again participate to this year’s edition that takes place in Los Angeles on September 13-15. Free Electrons engineers Boris Brezillon, Alexandre Belloni, Grégory Clement and Thomas Petazzoni will attend the conference.

    If you’re attending this conference, or are located in the Los Angeles area, and want to meet us, do not hesitate to drop us a line at info@free-electrons.com. You can also follow Free Electrons Twitter feed for updates during the conference.

    by Thomas Petazzoni at August 29, 2017 11:43 AM

    August 25, 2017

    Open Hardware Repository

    White Rabbit Switch - Software - WR Switch firmware v5.0.1 released

    Since v5.0 was released we have found a few problems in the WR Switch software package. This new v5.0.1 release does not include new functionality but contains important hotfixes to v5.0. The FPGA bitstream used in v5.0.1 is exactly the same as in v5.0, therefore the same calibration values apply. As for any other release, you can find all the links to download the firmware binaries and manuals on our v5.0.1 release wiki page.

    Main fixes include:
    • USB flashing which was broken in v5.0
    • PPSI pre-master state fix
    • make menuconfig fixes
    • SNMP fixes
    • Webinterface fixes
    For the full list of solved issues please check:

    We advise updating your v5.0 switches to include these latest fixes.

    Greg Daniluk, Adam Wujek

    by Grzegorz Daniluk (grzegorz.daniluk@cern.ch) at August 25, 2017 11:37 AM

    August 19, 2017

    Harald Welte

    Osmocom jenkins test suite execution

    Automatic Testing in Osmocom

    So far, in many Osmocom projects we have unit tests next to the code. Those unit tests execute tests on a per-C-function basis, and typically use the respective function directly from a small test program, executed at make check time. The actual main program (like OsmoBSC or OsmoBTS) is not executed at that time.

    We also have VTY testing, which specifically tests that the VTY has proper documentation for all nodes of all commands.

    Then there's a big gap, and we have osmo-gsm-tester for testing a full cellular network end-to-end. It includes physical GSM modems, a coaxial distribution network, attenuators, splitter/combiners, real BTS hardware and the logic to run the full network, from OsmoBTS to the core - both for OsmoNITB and OsmoMSC+OsmoHLR based networks.

    However, I think a lot of testing falls somewhere in between, where you want to run the program-under-test (e.g. OsmoBSC), but you don't want to run the MS, BTS and MSC that normally surround it. You want to test it by emulating the BTS on the Abis side and the MSC on the A side, and just test Abis and A interface transactions.

    For this kind of testing, I have recently started to investigate available options and tools.

    OsmoSTP (M3UA/SUA)

    Several months ago, during the development of OsmoSTP, I discovered that the Network Programming Lab of Münster University of Applied Sciences, led by Michael Tuexen, had released implementations of the ETSI test suite for the M3UA and SUA members of the SIGTRAN protocol family.

    The somewhat difficult part is that they are implemented in Scheme, using the guile interpreter/compiler, as well as a C-language based execution wrapper, which is in turn called by another guile wrapper script.

    I've reimplemented the test executor in python and added JUnitXML output to it. This means it can feed the test results directly into Jenkins.
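
    The JUnitXML format itself is simple enough to emit from the Python standard library; a minimal sketch of the idea (illustrative only, not the actual Osmocom test executor, with made-up test names):

        import xml.etree.ElementTree as ET

        def write_junit_xml(suite_name, results, path):
            """results: list of (test_name, passed: bool, message) tuples."""
            failures = sum(1 for _, passed, _ in results if not passed)
            suite = ET.Element("testsuite", name=suite_name,
                               tests=str(len(results)), failures=str(failures))
            for name, passed, message in results:
                case = ET.SubElement(suite, "testcase", classname=suite_name, name=name)
                if not passed:
                    failure = ET.SubElement(case, "failure", message=message)
                    failure.text = message
            ET.ElementTree(suite).write(path, encoding="utf-8", xml_declaration=True)

        # Hypothetical results of an M3UA test run, fed to Jenkins' JUnit parser
        write_junit_xml("m3ua-test", [
            ("test_asp_up", True, ""),
            ("test_ssnm_duna", False, "timeout waiting for response"),
        ], "m3ua-results.xml")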

    I've also cleaned up the Dockerfiles and related image generation for the osmo-stp-master, m3ua-test and sua-test images, as well as some scripts to actually execute them on one of the builders. You can find the related Dockerfiles as well as the associated Makefiles in http://git.osmocom.org/docker-playground

    The end result after integration with Osmocom jenkins can be seen in the following examples on jenkins.osmocom.org for M3UA and for SUA

    Triggering the builds is currently periodic once per night, but we could of course also trigger them automatically at some later point.

    OpenGGSN (GTP)

    For OpenGGSN, during the development of IPv6 PDP context support, I wrote some test infrastructure and test cases in TTCN-3. Those test cases can be found at http://git.osmocom.org/osmo-ttcn3-hacks/tree/ggsn_tests

    I've also packaged the GGSN and the test cases each into separate Docker containers called osmo-ggsn-latest and ggsn-test. Related Dockerfiles and Makefiles can again be found in http://git.osmocom.org/docker-playground - together with a Eclipse TITAN Docker base image using Debian Stretch called debian-stretch-titan

    Using those TTCN-3 test cases with the TITAN JUnitXML logger plugin we can again integrate the results directly into Jenkins, whose results you can see at https://jenkins.osmocom.org/jenkins/view/TTCN3/job/ttcn3-ggsn-test/14/testReport/(root)/GGSN_Tests/

    Further Work

    I've built some infrastructure for Gb (NS/BSSGP), VirtualUm and other testing, but have yet to build Docker images and related jenkins integration for it. Stay tuned. Also, lots more actual test cases are required. I'm very much looking forward to any contributions.

    by Harald Welte at August 19, 2017 10:00 PM

    August 18, 2017

    Open Hardware Repository

    1:8 Pulse/Frequency Distribution Amplifier - S/N005 phase-noise measurement at 10 MHz

    Phase-noise measurements show a flat spur-free phase-noise and AM-noise floor of -162 dBc/Hz at >100 Hz offset from a 10 MHz carrier. Measurements at 5 MHz to follow.
    See wiki for results.

    by Anders Wallin (anders.e.e.wallin@gmail.com) at August 18, 2017 06:38 AM

    August 16, 2017

    Free Electrons

    Updated bleeding edge toolchains on toolchains.free-electrons.com

    Two months ago, we announced a new service from Free Electrons: free and ready-to-use Linux cross-compilation toolchains, for a large number of architectures and C libraries, available at http://toolchains.free-electrons.com/.

    Bleeding edge toolchain updates

    All our bleeding edge toolchains have been updated, with the latest version of the toolchain components:

    • gcc 7.2.0, which was released 2 days ago
    • glibc 2.26, which was released 2 weeks ago
    • binutils 2.29
    • gdb 8.0

    Those bleeding edge toolchains are now based on Buildroot 2017.08-rc2, which brings a nice improvement: the host tools (gcc, binutils, etc.) are no longer linked statically against gmp, mpfr and other host libraries. They are dynamically linked against them with an appropriate rpath encoded into the gcc and binutils binaries to find those shared libraries regardless of the installation location of the toolchain.

    However, due to gdb 8.0 requiring a C++11 compiler on the host machine (at least gcc 4.8), our bleeding edge toolchains are now built in a Debian Jessie system instead of Debian Squeeze, which means that at least glibc 2.14 is needed on the host system to use them.

    The only toolchains for which the tests are not successful are the MIPS64R6 toolchains, due to the Linux kernel not building properly for this architecture with gcc 7.x. This issue has already been reported upstream.

    Stable toolchain updates

    We haven’t changed the component versions of our stable toolchains, but we made a number of fixes to them:

    • The armv7m and m68k-coldfire toolchains have been rebuilt with a fixed version of elf2flt that makes the toolchain linker directly usable. This fixes building the Linux kernel using those toolchains.
    • The mips32r5 toolchain has been rebuilt with NaN 2008 encoding (instead of NaN legacy), which makes the resulting userspace binaries actually executable by the Linux kernel, which expects NaN 2008 encoding on mips32r5 by default.
    • Most mips toolchains for musl have been rebuilt, with Buildroot fixes for the creation of the dynamic linker symbolic link. This has no effect on the toolchain itself, but allows the tests under Qemu to work properly and validate the toolchains.

    Other improvements

    We made a number of small improvements to the toolchains.free-electrons.com site:

    • Each architecture now has a page that lists all the available toolchain versions. This makes it easy to find a toolchain that matches your requirements (in terms of gcc version, kernel headers version, etc.). See All aarch64 toolchains for an example.
    • We added a FAQ as well as a news page.

    As usual, we welcome feedback about our toolchains, either on our bug tracker or by mail at info@free-electrons.com.

    by Thomas Petazzoni at August 16, 2017 08:20 PM

    Open Hardware Repository

    1:8 Pulse/Frequency Distribution Amplifier - S/N005 assembled and tested

    Amplifier S/N 005 was assembled and tested, housing PDA2017.07 and FDA2017.07 boards.
    Initial phase-noise tests show good performance, similar to the previous generation of the board.
    The new PDA2017.07 design using IDT5PB1108 has very fast rise-time (to be measured), possibly a quite low output-impedance (to be fixed?) and a preliminary channel-to-channel output skew of max 250 ps.

    by Anders Wallin (anders.e.e.wallin@gmail.com) at August 16, 2017 01:29 PM

    Free Electrons

    Free Electrons proposes an I3C subsystem for the Linux kernel

    MIPI I3C fact sheet, from the MIPI I3C white paper

    At the end of 2016, the MIPI consortium finalized the first version of its I3C specification, a new communication bus that aims at replacing older busses like I2C or SPI. According to the specification, I3C gets close to SPI data rates while requiring fewer pins, and adds interesting mechanisms like in-band interrupts, hotplug capability and automatic discovery of devices connected on the bus. In addition, I3C provides backward compatibility with I2C: I3C and legacy I2C devices can be connected on a common bus controlled by an I3C master.

    For more details about I3C, we suggest reading the MIPI I3C Whitepaper, as unfortunately MIPI has not publicly released the specifications for this protocol.

    For the last few months, Free Electrons engineer Boris Brezillon has been working with Cadence to develop a Linux kernel subsystem to support this new bus, as well as Cadence’s I3C master controller IP. We have now posted the first version of our patch series to the Linux kernel mailing list for review, and we already received a large number of very useful comments from the kernel community.

    Free Electrons is proud to be pioneering the support for this new bus in the Linux kernel, and hopes to see other developers contribute to this subsystem in the near future!

    by Boris Brezillon at August 16, 2017 01:10 PM

    August 14, 2017

    Bunnie Studios

    Name that Ware, August 2017

    The Ware for August 2017 is below.

    I removed a bit of context to make it more difficult — if it proves unguessable I’ll zoom out slightly (or perhaps just leave one extra, crucial hint to consider).

    by bunnie at August 14, 2017 04:23 PM

    Winner, Name that Ware July 2017

    The ware for July 2017 is a PMT (photomultiplier tube) module. I’d say wrm gets the prize this month, for getting that it’s a PMT driver first, and for linking to a schematic. :) That’s an easy way to win me over. Gratz, email me to claim your prize!

    by bunnie at August 14, 2017 04:22 PM

    August 08, 2017

    Harald Welte

    IPv6 User Plane support in Osmocom

    Preface

    Cellular systems ever since GPRS have used a tunnel-based architecture to provide IP connectivity to cellular terminals such as phones, modems, M2M/IoT devices and the like. The MS/UE establishes a PDP context between itself and the GGSN on the other end of the cellular network. The GGSN then is the first IP-level router, and the entire cellular network is abstracted away from the user-IP point of view.

    This architecture didn't change with EGPRS or with UMTS and HSxPA, and it even survived conceptually in LTE/4G.

    While the concept of a PDP context / tunnel exists to de-couple the transport layer from the structure and type of data inside the tunneled data, the primary user plane so far has been IPv4.

    In Osmocom, we made sure that there are no impairments / assumptions about the contents of the tunnel, so OsmoPCU and OsmoSGSN do not care at all what bits and bytes are transmitted in the tunnel.

    The only Osmocom component dealing with the type of tunnel and its payload structure is OpenGGSN. The GGSN must allocate the address/prefix assigned to each individual MS/UE, perform routing between the external IP network and the cellular network and hence is at the heart of this. Sadly, OpenGGSN was an abandoned project for many years until Osmocom adopted it, and it only implemented IPv4.

    This is actually a big surprise to me. Many of the users of the Osmocom stack are from the IT security area. They use the Osmocom stack to test mobile phones for vulnerabilities, analyze mobile malware and the like. As any penetration tester should be interested in analyzing all of the attack surface exposed by a given device-under-test, I would have assumed that testing just on IPv4 would be insufficient and over the past 9 years, somebody should have come around and implemented the missing bits for IPv6 so they can test on IPv6, too.

    In reality, it seems nobody shared this line of thinking and invested a bit of time in growing the tools used. Or if they did, they didn't share the related code.

    In June 2017, Gerrie Roos submitted a patch for OpenGGSN IPv6 support that raised hopes about soon being able to close that gap. However, on closer inspection it turned out that the code was written against a version of OpenGGSN that was more than 7 years old, and that it primarily focused on IPv6 on the outer (transport) layer, rather than on the inner (user) layer.

    OpenGGSN IPv6 PDP Context Support

    So in July 2017, I started to work on IPv6 PDP support in OpenGGSN.

    Initially I thought How hard can it be? It's not like IPv6 is new to me (I joined 6bone under 3ffe prefixes back in the 1990s and worked on IPv6 support in ip6tables ages ago). And aside from allocating/matching longer addresses, what kind of complexity does one expect?

    After my initial attempt at an implementation, partially misled by the patch that was contributed against that 2010-or-older version of OpenGGSN, I'm surprised how wrong I was.

    In IPv4 PDP contexts, the process of establishing a PDP context is simple:

    • Request establishment of a PDP context, set the type to IETF IPv4
    • Receive an allocated IPv4 End User Address
    • Optionally use IPCP (part of PPP) to request and receive DNS Server IP addresses

    So I implemented the identical approach for IPv6. Maintain a pool of IPv6 addresses, allocate one, and use IPCP for DNS. And nothing worked.

    • IPv6 PDP contexts assign a /64 prefix, not a single address or a smaller prefix
    • The End User Address that's part of the Signalling plane of Layer 3 Session Management and GTP is not the actual address, but just serves to generate the interface identifier portion of a link-local IPv6 address (see the sketch after this list)
    • IPv6 stateless autoconfiguration is used with this link-local IPv6 address inside the User Plane, after the control plane signaling to establish the PDP context has completed. This means the GGSN needs to parse ICMPv6 router solicitations and generate ICMPv6 router advertisements.
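
    To illustrate the second point, here is a minimal Python sketch (not OpenGGSN code; the End User Address and prefix values are made up) of how the link-local address is derived from the signalled EUA, and how the real address is only formed once a prefix is learned from the router advertisement:

    import ipaddress

    # Hypothetical 16-byte End User Address as signalled in the PDP context
    # (made-up value, for illustration only).
    eua = ipaddress.IPv6Address("2001:db8::1234:5678:9abc:def0")

    # Only the lower 64 bits (the interface identifier) are actually used...
    iid = int(eua) & 0xFFFFFFFFFFFFFFFF

    # ...to form the link-local address the MS configures on the PDP context.
    link_local = ipaddress.IPv6Address((0xFE80 << 112) | iid)
    print(link_local)        # fe80::1234:5678:9abc:def0

    # The /64 prefix is only learned later, from the GGSN's router
    # advertisement; the MS combines it with the same interface identifier.
    prefix = ipaddress.IPv6Network("2001:db8:cafe:42::/64")   # made-up prefix
    print(ipaddress.IPv6Address(int(prefix.network_address) | iid))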

    To make things worse, the stateless autoconfiguration is modified in some subtle ways that make it different from the normal SLAAC used on Ethernet and other media (a rough sketch of such a router advertisement follows the list):

    • the timers / lifetimes are different
    • only one prefix is permitted
    • only a prefix length of 64 is permitted
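
    For illustration only – this is not taken from OpenGGSN, and the addresses, lifetimes and interface name are invented – a scapy sketch of the kind of constrained router advertisement a GGSN has to emit on the user plane could look like this:

    from scapy.all import IPv6, ICMPv6ND_RA, ICMPv6NDOptPrefixInfo, send

    # Made-up addresses: GGSN link-local source, all-nodes multicast destination.
    ra = (IPv6(src="fe80::1", dst="ff02::1") /
          ICMPv6ND_RA(routerlifetime=65535) /            # lifetimes differ from Ethernet SLAAC
          ICMPv6NDOptPrefixInfo(prefix="2001:db8:cafe:42::",
                                prefixlen=64,            # only a /64 prefix is permitted
                                L=0,                     # not on-link: point-to-point PDP context
                                A=1,                     # autonomous address configuration
                                validlifetime=0xffffffff,
                                preferredlifetime=0xffffffff))
    ra.show()
    # send(ra, iface="tun0")   # hypothetical tun interface towards the MS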

    A few days later I implemented all of that, but it still didn't work. The problem was with DNS server addresses. In IPv4, the 3GPP protocols simply tunnel IPCP frames for this. This makes a lot of sense, as IPCP is designed for point-to-point interfaces, and this is exactly what a PDP context is.

    In IPv6, the corresponding IP6CP protocol does not have the capability to provision DNS server addresses to a PPP client. WTF? The IETF seriously requires implementations to do DHCPv6 over PPP, after establishing a point-to-point connection, only to get DNS server information?!? Some people suggested an IETF draft to change this, but the draft expired in 2011 and we're still stuck.

    While 3GPP permits the use of DHCPv6 in some scenarios, support for it in phones/modems is not mandatory. Rather, 3GPP has come up with its own mechanism for communicating DNS server IPv6 addresses during PDP context activation: the use of containers as part of the PCO Information Element used in L3-SM and GTP (see Section 10.5.6.3 of 3GPP TS 24.008). By the way, they also specified the same mechanism for IPv4, so there are now two competing methods for provisioning IPv4 DNS server information: IPCP and the new method.

    In any case, after some more hacking, OpenGGSN can now also provide DNS server information to the MS/UE. And once that was implemented, I had actual live user IPv6 data over a full Osmocom cellular stack!

    Summary

    We now have working IPv6 User IP in OpenGGSN. Together with the rest of the Osmocom stack you can operate a private GPRS, EGPRS, UMTS or HSPA network that provides end-to-end transparent, routed IPv6 connectivity to mobile devices.

    All in all, it took much longer than needed, and the following questions remain in my mind:

    • why did the IETF not specify IP6CP capabilities to configure DNS servers?
    • why the complex two-stage address configuration with PDP EUA allocation for the link-local address first and then stateless autoconfiguration?
    • why don't we simply allocate the entire prefix via the End User Address information element on the signaling plane? Surely, next to the 16-byte address, we could have put one byte for the prefix length?
    • why do I see duplicate-address-detection flavour neighbour solicitations from Qualcomm-based phones on what is a point-to-point link with exactly two devices: the UE and the GGSN?
    • why do I see link-layer source address options inside the ICMPv6 neighbor and router solicitation from mobile phones, when that option is specifically not to be used on point-to-point links?
    • why is the smallest prefix that can be allocated a /64? That's such a waste for a point-to-point link with a single device on the other end, and in times of billions of connected IoT devices it will just encourage the use of non-public IPv6 space (i.e. SNAT/MASQUERADING) while wasting large parts of the address space

    Some of those choices would have made sense if it had been made fully compatible with normal IPv6 as used e.g. on Ethernet. But implementing ICMPv6 router and neighbor solicitation without getting any benefit, such as the ability to have multiple prefixes or prefixes of different lengths: I just don't understand why anyone ever thought this was a good idea.

    You can find the code at http://git.osmocom.org/openggsn/log/?h=laforge/ipv6 and the related ticket at https://osmocom.org/issues/2418

    by Harald Welte at August 08, 2017 10:00 PM

    July 31, 2017

    Open Hardware Repository

    White Rabbit - 31-07-2017: WR Switch Production Test Suite published

    For the past few months we have been working with INCAA Computers BV on a new WR Switch Production Test Suite. This
    system makes it possible to verify, during production or after delivery, that all the components of the WR Switch hardware work properly.
    Please check the WRS PTS wiki page for all the sources and documentation.

    by Grzegorz Daniluk (grzegorz.daniluk@cern.ch) at July 31, 2017 06:11 PM

    July 27, 2017

    Bunnie Studios

    Name that Ware July 2017

    The Ware for July 2017 is shown below.

    Decided to do this one with the potting on to make it a smidgen more challenging.

    by bunnie at July 27, 2017 04:17 AM

    Winner, Name that Ware June 2017

    The Ware for June 2017 is an ultrasonic delay line. Picked this beauty up while wandering the junk shops of Akihabara. There’s something elegant about the Old Ways that’s simply irresistible to me…back when the answer to all hard problems was not simply “transform it into the software domain and then compute the snot out of it”.

    Grats to plum33 for nailing it! email me for your prize.

    by bunnie at July 27, 2017 04:17 AM

    July 19, 2017

    Video Circuits

    Photos From The Video Workshop

    The video workshop Alex and I gave was one of the best I have delivered: all 15 attendees got to take home a working CHA/V module they built in the class. It's a hacked VGA signal generator that basically allows you to build a simple video synth by adding some home-brew or off-the-shelf oscillators. We had a great mix of attendees, all from really interesting backgrounds and super engaged. Alex as usual did a nicely paced video synthesis tutorial and I then led the theory and building part of the class. We rounded off with Alex leading a discussion around historical video synthesis work and then proceeded to enjoy the evening concerts that were also part of the fantastic Brighton modular meet. (Pics 3+9 here are from Fabrizio D'Amico who runs Video Hack Space.) Thanks to Andrew for organising the amazing meet which hosted the workshop, Matt for making our panels last minute, George for helping us out on the day and Steve from Thonk for supplying some components for the kits.










    by Chris (noreply@blogger.com) at July 19, 2017 02:39 AM

    July 18, 2017

    Harald Welte

    Virtual Um interface between OsmoBTS and OsmocomBB

    During the last couple of days, I've been working on completing, cleaning up and merging a Virtual Um interface (i.e. virtual radio layer) between OsmoBTS and OsmocomBB. After I started the implementation and left it at an early stage in January 2016, Sebastian Stumpf completed it around early 2017, and it has now received some subsequent fixes and improvements by me. The combined result allows us to run a complete GSM network with 1-N BTSs and 1-M MSs without any actual radio hardware, which is of course excellent for all kinds of testing scenarios.

    The Virtual Um layer is based on sending L2 frames (blocks) encapsulated via GSMTAP UDP multicast packets. There are two separate multicast groups, one for uplink and one for downlink. The multicast nature simulates the shared medium and enables any simulated phone to receive the signal from multiple BTSs via the downlink multicast group.
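
    As a rough sketch (not part of OsmoBTS or OsmocomBB; the multicast group address below is a placeholder, and only the first GSMTAP header fields are decoded), listening in on such a downlink group from Python could look like this:

    import socket
    import struct

    GROUP = "239.193.23.1"   # placeholder: the groups used by osmo-bts-virtual/virtphy are configurable
    PORT = 4729              # GSMTAP's registered UDP port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group on all interfaces, like any simulated phone would.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        frame, addr = sock.recvfrom(4096)
        # First GSMTAP header bytes: version, header length (in 32-bit words), payload type.
        version, hdr_len, gsmtap_type = struct.unpack_from("!BBB", frame)
        payload = frame[hdr_len * 4:]
        print("GSMTAP v%d type %d, %d byte L2 block from %s"
              % (version, gsmtap_type, len(payload), addr[0]))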

    /images/osmocom-virtum.png

    In OsmoBTS, this is implemented via the new osmo-bts-virtual BTS model.

    In OsmocomBB, this is realized by adding virtphy, a virtual L1, which speaks the same L1CTL protocol that is used between the real OsmocomBB Layer 1 and the Layer 2/3 programs such as mobile and the like.

    Now many people would argue that GSM without the radio and actual handsets is no fun. I tend to agree, as I'm a hardware person at heart and I am not a big fan of simulation.

    Nevertheless, this forms the basis of all kinds of possibilities for automated (regression) testing, in ways and for layers/interfaces that osmo-gsm-tester cannot cover, as it uses a black-box proprietary mobile phone (modem). It is also pretty useful if you're traveling a lot and don't want to carry around a BTS and phones all the time, or want to get some development done in airplanes or other places where operating a radio transmitter is not really a (viable) option.

    If you're curious and want to give it a shot, I've put together some setup instructions at the Virtual Um page of the Osmocom Wiki.

    by Harald Welte at July 18, 2017 10:00 PM

    July 15, 2017

    Bunnie Studios

    That’s a Big Microscope…

    I’ve often said that there are no secrets in hardware — you just need a bigger, better microscope.

    I think I’ve found the limit to that statement. To give you an idea, here’s the “lightbulb” that powers the microscope:

    It’s the size of a building, and it’s the Swiss Light Source. Actually, not all of that building is dedicated to this microscope, just one beamline of an X-ray synchrotron capable of producing photons at an energy of 6.5keV (X-rays) at a flux of close to a billion coherent photons per second — but still, it’s a big light bulb. It might be a while before you see one of these popping up in a hacker’s garage…err, hangar…somewhere.

    The result? One can image, in 3-D and “non-destructively” (e.g., without having to delayer or etch away dielectrics), chips down to a resolution of 14.6nm.

    That’s a pretty neat trick if you’re trying to reverse engineer modern silicon.

    You can read the full article at Nature (“High Resolution non-destructive three-dimensional imaging of integrated circuits” by Mirko Holler et al). I’m a paying subscriber to Nature so I’m supposed to have access to the article, but at the moment, their paywall is throwing a null pointer exception. Once the paywall is fixed you can buy a copy of the article to read, but in the meantime, SciHub seems more reliable.

    You get what you pay for, right?

    by bunnie at July 15, 2017 01:55 PM

    July 11, 2017

    Elphel

    Current video stream latency and a way to reduce it

    Fig.1 Live stream latency testing

    Recently we had an inquiry about whether our cameras are capable of streaming low-latency video. The short answer is yes, the camera’s average output latency for 1080p at 30 fps is ~16 ms. It is possible to reduce it to almost 0.5 ms with a few changes to the driver.

    However, the total latency of the system, from capture to display, includes delays caused by the network, PC, software and display.

    In the results of the experiment (similar to this one) these delays contribute the most (around 40-50 ms) to the stream latency – at least, for the given equipment.


     

    Goal

    Measure the total latency of a live stream over network from 10393 camera.
     

    Setup

    • Camera: NC393-F-CS
      • Resolution@fps: 1080p@30fps,  720p@60fps
      • Compression quality: 90%
      • Exposure time: 1.7 ms
      • Stream formats: mjpeg, rtsp
      • Sensor: MT9P001, 5MPx, 1/2.5″
      • Lens: Computar f=5mm, f/1.4, 1/2″
    • PC: Shuttle box, i7, 16GB RAM, GeForce GTX 560 Ti
    • Display: ASUS VS24A, 60Hz (=16.7ms), 5ms gtg
    • OS: Kubuntu 16.04
    • Network connection: 1Gbps, direct camera-PC via cable
    • Applications:
      • gstreamer
      • chrome, firefox
      • mplayer
      • vlc
    • Stopwatch: basic javascript

     

    Notes



    Table 1: Transfer times and data rate

    Resolution/fps   Image size (1), KB   Transfer time (2), ms   Data rate (3), Mbps
    720p/60          250                  2                       120
    1080p/30         500                  4                       120

    1 – average compressed (90%) image size
    2 – time it takes to transfer a single image over the network. Jitter is unknown. t = Image_size / 1 Gbps
    3 – required bandwidth: rate = fps*Image_size
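
    As a quick sanity check of those numbers (a few lines of Python using the Table 1 values):

    # Transfer time of one compressed frame over a 1 Gbps link, and the
    # bandwidth needed to sustain the frame rate (values from Table 1).
    for name, size_kb, fps in [("720p", 250, 60), ("1080p", 500, 30)]:
        bits = size_kb * 1000 * 8          # frame size in bits
        t_ms = bits / 1e9 * 1000           # transfer time over 1 Gbps, in ms
        rate_mbps = fps * bits / 1e6       # required bandwidth, in Mbps
        print("%s: %.0f ms per frame, %.0f Mbps" % (name, t_ms, rate_mbps))
    # 720p: 2 ms per frame, 120 Mbps
    # 1080p: 4 ms per frame, 120 Mbps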

    Camera output latency calculation

    All numbers below are for the given lens, sensor, camera setup and parameters. Briefly:

    Sensor
    Because of ERS (electronic rolling shutter), each row’s latency is different. See tables 2 and 3.
     
    Table 2: tROW and tTR

    Resolution             tROW (1), us   tTR (2), us
    720p                   22.75          13.33
    1080p                  29.42          20
    full res (2592×1936)   36.38          27

    1 – row time, see datasheet. tROW = f(Width)
    2 – time it takes to transfer a row over sensor cable, clock = 96MHz. tTR = Width/96MHz
     
    Table 3: Average latency and the whole range.

    Resolution   tERS avg (1), ms   tERS whole range (2), ms
    720p         8                  0.01-16
    1080p        16                 0.02-32

    1 – average latency
    2 – min – last row latency, max – 1st row latency

    Exposure

    tEXP < 1 ms is a typical exposure time outdoors. The display used in this test is bright enough to allow a 1.7 ms exposure with the gains maxed.

    Compressor

    The compressor is implemented in the FPGA and runs 3x faster than the sensor readout, but needs a stripe of 20 rows in memory. Thus, the compressor will finish ~20/3*tROW after the whole image is read out.

    tCMP = 20/3*tROW

    Summary

    tCAM = tERS + tEXP + tCMP

    Since the image is read and compressed by the FPGA logic of the Zynq, and this pipeline has been simulated, we can be confident in these numbers.
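
    As a quick cross-check of the figures in Table 4 below (a few lines of Python, using the tROW values from Table 2 and the average ERS latencies from Table 3):

    # Average camera output latency: rolling-shutter readout + exposure + compressor.
    t_exp = 1.7  # ms, exposure time used in this test
    for name, t_row_us, t_ers_avg in [("720p", 22.75, 8.0), ("1080p", 29.42, 16.0)]:
        t_row = t_row_us / 1000.0      # ms
        t_cmp = 20 / 3 * t_row         # compressor finishes ~20/3 rows after readout
        print("%s: tCAM = %.1f ms" % (name, t_ers_avg + t_exp + t_cmp))
    # 720p:  tCAM = 9.9 ms
    # 1080p: tCAM = 17.9 ms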
     
    Table 4: Average output latency + exposure

    Resolution tCAM, ms
    720p 9.9
    1080p 17.9

    Stopwatch accuracy

    The JavaScript stopwatch is not accurate. For simplicity, we will rely on the camera’s internal clock, which timestamps every image, and take the JavaScript timer readings only as unique labels, thus not caring what time they are actually showing.
     

    Results

    Fig.2 1080p 30fps

    Fig.3 720p 60fps

     
    GStreamer showed the best results among the tested programs.
    Since the camera frame rate is discrete, the measured latency is a multiple of 1/fps (see this article):

    • 30 fps => 33.3 ms
    • 60 fps => 16.7 ms

     

    Table 5: Measured total latency

    Resolution/fps   Total latency, ms   Network+PC+SW latency, ms
    720p@60fps       33.3-50             23.4-40.1
    1080p@30fps      33.3-66.7           15.4-48.8

     

    Possible improvements

    Camera

    Currently, the driver waits for the interrupt from the compressor that indicates the image is fully compressed and ready for transfer. However, one does not have to wait for the whole image; the transfer can start as soon as a minimal amount of compressed data is ready.

    There are 3 more interrupts related to image pipeline events. One of them is “compression started” – switching to it can reduce the output latency to (10+20/3)*tROW, or 0.4 ms for 720p and 0.5 ms for 1080p.

    Other hardware and software

    In addition to the most obvious improvements:

    • For wifi: use 5GHz over 2.4GHz – smaller jitter, non-overlapping channels
    • Lower latency software: for mjpeg, use gstreamer or vlc (takes extra effort to set up) rather than chrome or firefox, because the browsers do extra buffering

     

    Links

     

    Updates

     
    Table 6: Camera ports

    Camera port   mjpeg   rtsp
    port 0        2323    554
    port 1        2324    556
    port 2        2325    558
    port 3        2326    560

    GStreamer pipelines

    • For mjpeg:

    ~$ gst-launch-1.0 souphttpsrc is-live=true location=http://192.168.0.9:2323/mimg ! jpegdec ! xvimagesink

    • For rtsp:

    ~$ gst-launch-1.0 rtspsrc is-live=true location=rtsp://192.168.0.9:554 ! rtpjpegdepay ! jpegdec ! xvimagesink

    VLC

    ~$ vlc rtsp://192.168.0.9:554

    Chrome/Firefox

    Open http://192.168.0.9:2323/mimg

    by Oleg Dzhimiev at July 11, 2017 05:33 PM

    July 10, 2017

    Free Electrons

    Linux 4.12, Free Electrons contributions

    Linus Torvalds released the 4.12 Linux kernel a week ago, in what is the second biggest kernel release ever by number of commits. As usual, LWN had a very nice coverage of the major new features and improvements: first part, second part and third part.

    LWN has also published statistics about the Linux 4.12 development cycle, showing:

    • Free Electrons as the #14 contributing company by number of commits, with 221 commits, between Broadcom (230 commits) and NXP (212 commits)
    • Free Electrons as the #14 contributing company by number of changed lines, with 16636 lines changed, just two lines fewer than Mellanox
    • Free Electrons engineer and MTD NAND maintainer Boris Brezillon as the #17 most active contributor by number of lines changed.

    Our most important contributions to this kernel release have been:

    • On Atmel AT91 and SAMA5 platforms:
      • Alexandre Belloni has continued to upstream the support for the SAMA5D2 backup mode, which is a very deep suspend to RAM state, offering very nice power savings. Alexandre touched the core code in arch/arm/mach-at91 as well as pinctrl and irqchip drivers
      • Boris Brezillon has converted the Atmel PWM driver to the atomic API of the PWM subsystem, implemented suspend/resume and did a number of fixes in the Atmel display controller driver, and also removed the no longer used AT91 Parallel ATA driver.
      • Quentin Schulz improved the suspend/resume hooks in the atmel-spi driver to support the SAMA5D2 backup mode.
    • On Allwinner platforms:
      • Mylène Josserand has made a number of improvements to the sun8i-codec audio driver that she contributed a few releases ago.
      • Maxime Ripard added devfreq support to dynamically change the frequency of the GPU on the Allwinner A33 SoC.
      • Quentin Schulz added battery charging and ADC support to the X-Powers AXP20x and AXP22x PMICs, found on Allwinner platforms.
      • Quentin Schulz added a new IIO driver to support the ADCs found on numerous Allwinner SoCs.
      • Quentin Schulz added support for the Allwinner A33 built-in thermal sensor, and used it to implement thermal throttling on this platform.
    • On Marvell platforms:
      • Antoine Ténart contributed Device Tree changes to describe the cryptographic engines found in the Marvell Armada 7K and 8K SoCs. For now only the Device Tree description has been merged, the driver itself will arrive in Linux 4.13.
      • Grégory Clement has contributed a pinctrl and GPIO driver for the Marvell Armada 3720 SoC (Cortex-A53 based)
      • Grégory Clement has improved the Device Tree description of the Marvell Armada 3720 and Marvell Armada 7K/8K SoCs and corresponding evaluation boards: SDHCI and RTC are now enabled on Armada 7K/8K, USB2, USB3 and RTC are now enabled on Armada 3720.
      • Thomas Petazzoni made a significant number of changes to the mvpp2 network driver, finally adding support for the PPv2.2 version of this Ethernet controller. This allowed enabling network support on the Marvell Armada 7K/8K SoCs.
      • Thomas Petazzoni contributed a number of fixes to the mv_xor_v2 dmaengine driver, used for the XOR engines on the Marvell Armada 7K/8K SoCs.
      • Thomas Petazzoni cleaned up the MSI support in the Marvell pci-mvebu and pcie-aardvark PCI host controller drivers, which allowed removing a no-longer-used MSI kernel API.
    • On the ST SPEAr600 platform:
      • Thomas Petazzoni added support for the ADC available on this platform, by adding its Device Tree description and fixing a clock driver bug
      • Thomas did a number of small improvements to the Device Tree description of the SoC and its evaluation board
      • Thomas cleaned up the fsmc_nand driver, which is used for the NAND controller driver on this platform, removing lots of unused code
    • In the MTD NAND subsystem:
      • Boris Brezillon implemented a mechanism to allow vendor-specific initialization and detection steps to be added, on a per-NAND chip basis. As part of this effort, he has split into multiple files the vendor-specific initialization sequences for Macronix, AMD/Spansion, Micron, Toshiba, Hynix and Samsung NANDs. This work will allow in the future to more easily exploit the vendor-specific features of different NAND chips.
    • Other contributions:
      • Maxime Ripard added a display panel driver for the ST7789V LCD controller

    In addition, several Free Electrons engineers are also maintainers of various kernel subsystems. During this release cycle, they reviewed and merged a number of patches from kernel contributors:

    • Maxime Ripard, as the Allwinner co-maintainer, merged 94 patches
    • Boris Brezillon, as the NAND maintainer and MTD co-maintainer, merged 64 patches
    • Alexandre Belloni, as the RTC maintainer and Atmel co-maintainer, merged 38 patches
    • Grégory Clement, as the Marvell EBU co-maintainer, merged 32 patches

    The details of all our contributions for this release:

    by Thomas Petazzoni at July 10, 2017 10:13 AM

    July 09, 2017

    Harald Welte

    Ten years after first shipping Openmoko Neo1973

    Exactly 10 years ago, on July 9th, 2007 we started to sell+ship the first Openmoko Neo1973. To be more precise, the webshop actually opened a few hours early, depending on your time zone. Sean announced the availability in this mailing list post

    I don't really have to add much to my ten years [of starting to work on] Openmoko anniversary blog post a year ago, but still thought it worthwhile to point out the tenth anniversary.

    It was exciting times, and there was a lot of pioneering spirit: Building a Linux based smartphone with a 100% FOSS software stack on the application processor, including all drivers, userland, applications - at a time before Android was known or announced. As history shows, we'd been working in parallel with Apple on the iPhone, and Google on Android. Of course there's little chance that a small Taiwanese company can compete with the endless resources of the big industry giants, and the many Neo1973 delays meant we had missed the window of opportunity to be the first on the market.

    It's sad that Openmoko (or similar projects) have not survived even as a special-interest project for FOSS enthusiasts. Today, virtually all options of smartphones are encumbered with way more proprietary blobs than we could ever imagine back then.

    In any case, the tenth anniversary of trying to change the amount of Free Software in the smartphone world is worth some celebration. I'm reaching out to old friends and colleagues, and I guess we'll have somewhat of a celebration party both in Germany and in Taiwan (where I'll be for my holidays from mid-September to mid-October).

    by Harald Welte at July 09, 2017 02:00 PM

    July 07, 2017

    Open Hardware Repository

    White Rabbit core collection - White Rabbit PTP Core v4.1 released

    We have just released v4.1 of the WR PTP Core. You can find all the links to download the reference designs binaries and documentation on our release wiki page.

    This release contains mainly fixes to the previous v4.0 stable release:
    • fixed PCIe reset for standalone operation
    • fixes to p2p mode in PPSi
    • fixed Rx termination scheme for Spartan6 PHY which made WRPC unable to work with some SFPs
    • fixes and updates to HDL board and platform wrappers
    and also some new features like:
    • new Wishbone registers bank available to read WRPC diagnostics from user application
    • new wrpc-diags host tool to read diagnostics over PCIe or VME
    • built-in default init script that loads SFP calibration parameters and configures WRPC in Slave mode
    • new document WRPC Failures and Diagnostics

    Thank you for all the bug reports and contributions. As always, we encourage you to try this fresh release on your boards.

    Greg Daniluk for the WR PTP Core team

    by Grzegorz Daniluk (grzegorz.daniluk@cern.ch) at July 07, 2017 02:21 PM

    July 03, 2017

    Open Hardware Repository

    OHR Meta Project - 29-06-2017: Open Doors for Universal Embedded Design

    The article Open Doors for Universal Embedded Design in Embedded Systems Engineering, written by Caroline Hayes, Senior Editor, reads:

    Charged with finding cost-effective integration for multicore platforms, the European Union’s (EU) Artemis EMC2 project finished at the end of May this year. A further initiative with CERN could mean the spirit of co-operation and the principles of open hardware herald an era of innovation.

    and

    This collaboration is a new initiative. The PC/104 Consortium will provide design-in examples of new and mature boards, with a reference design, for others to use and create something new. Although the Sundance board is the only [PC/104] product on the CERN Open Hardware Repository, there will be more news in the summer, promises Christensen. “My goal is to get five designs within the first year,” he says, and he is actively working to promote to PC/104 Consortium members that there is a place where they can download—and upload—reference designs which are PC/104-compatible.

    Read the full article.

    by Erik van der Bij (Erik.van.der.Bij@cern.ch) at July 03, 2017 02:14 PM

    June 26, 2017

    Bunnie Studios

    Name that Ware June 2017

    The Ware for June 2017 is shown below.

    If nobody can guess this one from just the pointy end of the stick, I’ll post a photo with more context…

    by bunnie at June 26, 2017 08:06 PM

    Winner, Name that Ware May 2017

    The Ware for May 2017 is the “Lorentz and Hertz” carriage board from an HP Officejet Pro 8500. Congrats to MegabytePhreak for nailing both the make and model of the printer it came from! email me for your prize.

    I found the name of the board to be endearing.

    by bunnie at June 26, 2017 08:06 PM

    June 19, 2017

    Free Electrons

    Free and ready-to-use cross-compilation toolchains

    For all embedded Linux developers, cross-compilation toolchains are part of the basic tool set, as they allow building code for a specific CPU architecture and debugging it. Until a few years ago, CodeSourcery was providing a lot of high quality pre-compiled toolchains for a wide range of architectures, but has progressively stopped doing so. Linaro provides some freely available toolchains, but only targeting ARM and AArch64. kernel.org has a set of pre-built toolchains for a wider range of architectures, but they are bare metal toolchains (they cannot build Linux userspace programs) and are updated infrequently.

    To fill in this gap, Free Electrons is happy to announce its new service to the embedded Linux community: toolchains.free-electrons.com.

    Free Electrons toolchains

    This web site provides a large number of cross-compilation toolchains, available for a wide range of architectures, in multiple variants. The toolchains are based on the classical combination of gcc, binutils and gdb, plus a C library. We currently provide a total of 138 toolchains, covering many combinations of:

    • Architectures: AArch64 (little and big endian), ARC, ARM (little and big endian, ARMv5, ARMv6, ARMv7), Blackfin, m68k (Coldfire and 68k), Microblaze (little and big endian), MIPS32 and MIPS64 (little and big endian, with various instruction set variants), NIOS2, OpenRISC, PowerPC and PowerPC64, SuperH, Sparc and Sparc64, x86 and x86-64, Xtensa
    • C libraries: GNU C library, uClibc-ng and musl
    • Versions: for each combination, we provide a stable version which uses slightly older but more proven versions of gcc, binutils and gdb, and we provide a bleeding edge version with the latest version of gcc, binutils and gdb.

    After being generated, most of the toolchains are tested by building a Linux kernel and a Linux userspace, and booting it under Qemu, which allows verifying that the toolchain is minimally working. We plan on adding more tests to validate the toolchains, and welcome your feedback on this topic. Of course, not all toolchains are tested this way, because some CPU architectures are not emulated by Qemu.

    The toolchains are built with Buildroot, but can be used for any purpose: build a Linux kernel or bootloader, as a pre-built toolchain for your favorite embedded Linux build system, etc. The toolchains are available in tarballs, together with licensing information and instructions on how to rebuild the toolchain if needed.

    We are very much interested in your feedback about those toolchains, so do not hesitate to report bugs or make suggestions in our issue tracker!

    This work was done as part of the internship of Florent Jacquet at Free Electrons.

    by Thomas Petazzoni at June 19, 2017 07:52 AM

    June 15, 2017

    Harald Welte

    How the Osmocom GSM stack is funded

    As the topic has been raised on twitter, I thought I might share a bit of insight into the funding of the Osmocom Cellular Infrastructure Projects.

    Keep in mind: Osmocom is a much larger umbrella project, and beyond the network-side cellular stack it is home to many different community-based projects around open source mobile communications. All of those started more or less as just-for-fun projects, nothing serious, just a hobby [1]

    The projects implementing the network-side protocol stacks and network elements of GSM/GPRS/EGPRS/UMTS cellular networks are somewhat the exception to that, as they have evolved and become professionalized to some extent. We call those projects collectively the Cellular Infrastructure projects inside Osmocom. This post is about that part of Osmocom only.

    History

    From late 2008 through 2009, people like Holger and I were working on bs11-abis and later OpenBSC only in our spare time. The name Osmocom didn't even exist back then. There was a strong technical community with contributions from Sylvain Munaut, Andreas Eversberg, Daniel Willmann, Jan Luebbe and a few others. None of this would have been possible if it wasn't for all the help we got from Dieter Spaar with the BS-11 [2]. We all had our dayjobs in other places, and OpenBSC work was really just a hobby. People were working on it, because it was where no FOSS hacker has gone before. It was cool. It was a big and pleasant challenge to enter the closed telecom space as pure autodidacts.

    Holger and I were doing freelance contract development work on Open Source projects for many years before. I was mostly doing Linux related contracting, while Holger has been active in all kinds of areas throughout the FOSS software stack.

    In 2010, Holger and I saw some first interest by companies into OpenBSC, including Netzing AG and On-Waves ehf. So we were able to spend at least some of our paid time on OpenBSC/Osmocom related contract work, and were thus able to do less other work. We also continued to spend tons of spare time in bringing Osmocom forward. Also, the amount of contract work we did was only a fraction of the many more hours of spare time.

    In 2011, Holger and I decided to start the company sysmocom in order to generate more funding for the Osmocom GSM projects by means of financing software development by product sales. So rather than doing freelance work for companies who bought their BTS hardware from other places (and spent huge amounts of cash on that), we decided that we wanted to be a full solution supplier, who can offer a complete product based on all hardware and software required to run small GSM networks.

    The only problem is: We still needed an actual BTS for that. Through some reverse engineering of existing products we figured out who one of the ODM suppliers for the hardware + PHY layer was, and decided to develop the OsmoBTS software for it. We inherited some of the early code from work done by Andreas Eversberg on the jolly/bts branch of OsmocomBB (thanks), but much was missing at the time.

    What follows was Holger and me working several years for free [3], without any salary, in order to complete the OsmoBTS software, build an embedded Linux distribution around it based on OE/poky, write documentation, etc. and complete the first sysmocom product: The sysmoBTS 1002

    We did that not because we want to get rich, or because we want to run a business. We did it simply because we saw an opportunity to generate funding for the Osmocom projects and make them more sustainable and successful. And because we believe there is a big, gaping, huge vacuum in terms of absence of FOSS in the cellular telecom sphere.

    Funding by means of sysmocom product sales

    Once we started to sell the sysmoBTS products, we were able to fund Osmocom related development from the profits made on hardware / full-system product sales. Every single unit sold made a big contribution towards funding both the maintenance as well as the ongoing development on new features.

    This source of funding continues to be an important factor today.

    Funding by means of R&D contracts

    The probably best and most welcome method of funding Osmocom related work is by means of R&D projects in which a customer funds our work to extend the Osmocom GSM stack in one particular area where he has a particular need that the existing code cannot fulfill yet.

    This kind of project is the ideal match, as it shows where the true strength of FOSS is: Each of those customers did not have to fund the development of a GSM stack from scratch. Rather, they only had to fund those bits that were missing for their particular application.

    Our reference for this is and has been On-Waves, who have been funding development of their required features (and bug fixing etc.) since 2010.

    We've of course had many other projects from a variety of customers over the years. Last, but not least, we had a customer who willingly co-funded (together with funds from NLnet foundation and lots of unpaid effort by sysmocom) the 3G/3.5G support in the Osmocom stack.

    The problem here is:

    • we have not been able to secure anywhere nearly as many of those R&D projects within the cellular industry, despite believing we have a very good foundation upon which we can build. I've been writing many exciting technical project proposals
    • you almost exclusively get funding only for new features. But it's very hard to get funding for the core maintenance work. The bug-fixing, code review, code refactoring, testing, etc.

    So as a result, the profit margin you have on selling R&D projects is basically used to fund (badly) those bits and pieces that nobody wants to pay for.

    Funding by means of customer support

    There is a way to generate funding for development by providing support services. We've had some success with this, but primarily alongside the actual hardware/system sales - not so much in terms of pure software-only support.

    Also, providing support services from a R&D company means:

    • either you distract your developers by handling support inquiries. This means they will have less time to work on actual code, and will likely get sidetracked by too many issues that make it hard to focus
    • or you have to hire separate support staff. This of course means that the size of the support business has to be sufficiently large to not only cover the costs of hiring + training support staff, but also still generate funding for the actual software R&D.

    We tried the second option for a short while, but have fallen back to the first for now. There's simply not enough user/admin-type support business to justify dedicated staff for that.

    Funding by means of cross-subsidizing from other business areas

    sysmocom also started to do some non-Osmocom projects in order to generate revenue that we can feed again into Osmocom projects. I'm not at liberty to discuss them in detail, but basically we've been doing pretty much anything from

    • custom embedded Linux board designs
    • M2M devices with GSM modems
    • consulting gigs
    • public tendered research projects

    Profits from all those areas went again into Osmocom development.

    Last, but not least, we also operate the sysmocom webshop. The profit we make on those products also is again immediately re-invested into Osmocom development.

    Funding by grants

    We've had some success in securing funding from NLnet Foundation for specific features. While this is useful, the size of their project grants – up to EUR 30k – is not a good fit for the scale of the tasks we have at hand inside Osmocom. You may think that's a considerable amount of money? Well, that translates to 2-3 man-months of work at a bare cost-covering rate. At a team size of 6 developers, you would theoretically have churned through that in two weeks. Also, their focus is (understandably) on Internet and IT security, and not so much on cellular communications.

    There are of course other options for grants, such as government research grants and the like. However, they require long-term planning, they require you to match (i.e. pay yourself) a significant portion, and basically mandate that you hire one extra person for doing all the required paperwork and reporting. So all in all, not a particularly attractive option for a very small company consisting of die hard engineers.

    Funding by more BTS ports

    At sysmocom, we've been doing some ports of the OsmoBTS + OsmoPCU software to other hardware, and supporting those other BTS vendors with porting, R&D and support services.

    If sysmocom was a classic BTS vendor, we would not help our "competition". However, we are not. sysmocom exists to help Osmocom, and we strongly believe in open systems and architectures, without a single point of failure, a single supplier for any component or any type of vendor lock-in.

    So we happily help third parties to get Osmocom running on their hardware, either with a proprietary PHY or with OsmoTRX.

    However, we expect that those BTS vendors also understand their responsibility to share the development and maintenance effort of the stack. Preferably by dedicating some of their own staff to work in the Osmocom community. Alternatively, sysmocom can perform that work as paid service. But that's a double-edged sword: We don't want to be a single point of failure.

    Osmocom funding outside of sysmocom

    Osmocom is of course more than sysmocom. Even for the cellular infrastructure projects inside Osmocom it is true: they are genuine, community-based, open, collaborative development projects. Anyone can contribute.

    Over the years, there have been code contributions by e.g. Fairwaves. They, too, build GSM base station hardware and use that as a means to not only recover the R&D on the hardware, but also to contribute to Osmocom. At some point a few years ago, there was a lot of work from them in the area of OsmoTRX, OsmoBTS and OsmoPCU. Unfortunately, in more recent years, they have not been able to keep up the level of contributions.

    There are other companies engaged in activities with and around Osmocom. There's Rhizomatica, an NGO helping indigenous communities to run their own cellular networks. They have been funding some of our efforts, but being an NGO helping rural regions in developing countries, they of course also don't have deep pockets. Ideally, we'd want to be the ones contributing to them, not the other way around.

    State of funding

    In recent years, we have been making some progress in securing funding from players we cannot name [4]. We're also making occasional progress in convincing BTS suppliers to chip in their share. Unfortunately there are more who don't live up to their responsibility than those who do. I might start calling them out by name one day. The wider community and the public actually deserve to know who plays by FOSS rules and who doesn't. That's not shaming, it's just stating bare facts.

    Which brings us to:

    • sysmocom is in an office that's actually too small for the team, equipment and stock. But we certainly cannot afford more space.
    • we cannot pay our employees what they could earn working at similar positions in other companies. So working at sysmocom requires dedication to the cause :)
    • Holger and I have invested way more time than we have ever paid ourselves, even more so considering the opportunity cost of what we would have earned if we'd continued on our freelance Open Source hacker path
    • we're [just barely] managing to pay for 6 developers dedicated to Osmocom development on our payroll based on the various funding sources indicated above

    Nevertheless, I doubt that any team this small has ever implemented an end-to-end GSM/GPRS/EGPRS network from RAN to core with a comparable feature set. My deepest respects to everyone involved. The big task now is to make it sustainable.

    Summary

    So as you can see, there's quite a bit of funding around. However, it always falls short of what's needed to implement all parts properly, and is not even quite sufficient to keep maintaining the status quo in a proper and tested way. That can often be frustrating (mostly to us, but sometimes also to users who run into regressions and other bugs). There's so much more potential. So many things we have wanted to add or clean up for a long time, but too few people interested in joining in and helping out - financially or by writing code.

    One thing that is often a challenge when dealing with traditional customers: we are not developing a product and then selling it ready-made. In fact, in FOSS this would be more or less suicidal: we'd have to invest man-years upfront, but then once it is finished, everyone can use it without having to partake in that investment.

    So instead, the FOSS model requires the customers/users to chip in early during the R&D phase, in order to then subsequently harvest the fruits of that.

    I think the lack of a FOSS mindset across the cellular / telecom industry is the biggest constraining factor here. I saw the same thing some 15-20 years ago in the Linux world. Trust me, it takes a lot of dedication to the cause to endure this lack of comprehension so many years later.

    [1]just like Linux has started out.
    [2]while you will not find a lot of commits from Dieter in the code, he has been playing a key role in doing a lot of prototyping, reverse engineering and debugging!
    [3]sysmocom is 100% privately held by Holger and me, we intentionally have no external investors and are proud to never had to take a bank loan. So all we could invest was our own money and, most of all, time.
    [4]contrary to the FOSS world, a lot of aspects are confidential in business, and we're not at liberty to disclose the identities of all our customers

    by Harald Welte at June 15, 2017 10:00 PM

    FOSS misconceptions, still in 2017

    The lack of basic FOSS understanding in Telecom

    Given that the Free and Open Source movement has been around at least since the 1980ies, it puzzles me that people still seem to have such fundamental misconceptions about it.

    Something that really triggered me was an article at LightReading [1] which quotes Ulf Ewaldsson, a leading Ericsson executive, with:

    "I have yet to understand why we would open source something we think is really good software"

    This completely misses the point. FOSS is not about making a charity donation of a finished product to the planet.

    FOSS is about sharing the development costs among multiple players, and avoiding that everyone has to reimplement the wheel. Macro-economically, it is complete and utter nonsense that each 3GPP specification gets implemented two dozen times, by at least a dozen different entities. As a result, products are way more expensive than needed.

    If large Telco players (whether operators or equipment manufacturers) were to collaboratively develop code just as much as they collaboratively develop the protocol specifications, there would be no need for replicating all of this work.

    As a result, everyone could produce cellular network elements at reduced cost, sharing the R&D expenses, and competing in key areas, such as who can come up with the most energy-efficient implementation, or can produce the most reliable hardware, the best receiver sensitivity, the best and most fair scheduling implementation, or whatever else. But some 80% of the code could probably be shared, as e.g. encoding and decoding messages according to a given publicly released 3GPP specification document is not where those equipment suppliers actually compete.

    So my dear cellular operator executives: Next time you're cursing about the prohibitively expensive pricing that your equipment suppliers quote you: You only have to pay that much because everyone is reimplementing the wheel over and over again.

    Equally, my dear cellular infrastructure suppliers: You are all dying one by one, as it's hard to develop everything from scratch. Over the years, many of you have died. One wonders if we might still have more players left if some of you had started to cooperate in developing FOSS, at least in those areas where you're not competing. You could replicate what Linux is doing in the operating system market. There's no need to have a phalanx of different proprietary flavors of Unix-like OSs. It's way too expensive, and it's not an area in which most companies need to or want to compete anyway.

    Management Summary

    You don't first develop an entire product until it is finished and then release it as open source. This makes little economic sense in a lot of cases, as you've already invested into developing 100% of it. Instead, you actually develop a new product collaboratively as FOSS in order to not have to invest 100% but maybe only 30% or even less. You get a multitude of your R&D investment back, because you're not only getting your own code, but all the other code that other community members implemented. You of course also get other benefits, such as peer review of the code, more ideas (not all bright people work inside one given company), etc.

    [1]that article is actually a heavily opinionated post by somebody who appears to have been pushing his own anti-FOSS agenda for some time. The author is misinformed about the fact that the TIP has always included projects under both FRAND and FOSS terms. As a TIP member I can attest to that fact. I'm only referencing it here for the purpose of that Ericsson quote.

    by Harald Welte at June 15, 2017 10:00 PM

    June 13, 2017

    Free Electrons

    Elixir Cross Referencer: new way to browse kernel sources

    Today, we are pleased to announce the initial release of the Elixir Cross-Referencer, or just “Elixir”, for short.

    What is Elixir?

    Elixir home page

    Since 2006, we have provided a Linux source code cross-referencing online tool as a service to the community. The engine behind this website was LXR, a Perl project almost as old as the kernel itself. For the first few years, we used the then-current 0.9.5 version of LXR, but in early 2009 and for various reasons, we reverted to the older 0.3.1 version (from 1999!). In a nutshell, it was simpler and it scaled better.

    Recently, we had the opportunity to spend some time on it, to correct a few bugs and to improve the service. After studying the Perl source code and trying out various cross-referencing engines (among which LXR 2.2 and OpenGrok), we decided to implement our own source code cross-referencing engine in Python.

    Why create a new engine?

    Our goal was to extend our existing service (support for multiple projects, responsive design, etc.) while keeping it simple and fast. When we tried other cross-referencing engines, we were dissatisfied with their relatively low performance on a large codebase such as Linux. Although we probably could have tweaked the underlying database engine for better performance, we decided it would be simpler to stick to the strategy used in LXR 0.3: get away from the relational database engine and keep plain lists in simple key-value stores.

    Another reason that motivated a complete rewrite was that we wanted to provide an up-to-date reference (including the latest revisions) while keeping it immutable, so that external links to the source code wouldn’t get broken in the future. As a direct consequence, we would need to index many different revisions for each project, with potentially a lot of redundant information between them. That’s when we realized we could leverage the data model of Git to deal with this redundancy in an efficient manner, by indexing Git blobs, which are shared between revisions. In order to make sure queries under this strategy would be fast enough, we wrote a proof-of-concept in Python, and thus Elixir was born.
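
    The underlying observation is easy to reproduce with plain Git: most blobs are bit-identical between two releases, so indexing per blob avoids re-parsing unchanged files. A rough Python sketch (the tag names are just examples, and this is not Elixir's own code):

    import subprocess

    def blobs(tag):
        """Map path -> blob hash for every file reachable from a Git tag."""
        out = subprocess.check_output(["git", "ls-tree", "-r", tag], text=True)
        result = {}
        for line in out.splitlines():
            meta, path = line.split("\t", 1)
            mode, objtype, sha = meta.split()
            if objtype == "blob":
                result[path] = sha
        return result

    # Run inside a kernel clone; v4.11 and v4.12 are just example tags.
    a, b = blobs("v4.11"), blobs("v4.12")
    shared = sum(1 for path, sha in a.items() if b.get(path) == sha)
    print("%d of %d blobs in v4.11 are bit-identical in v4.12" % (shared, len(a)))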

    What service does it provide?

    First, we tried to minimize disruption to our users by keeping the user interface close to that of our old cross-referencing service. The main improvements are:

    • We now support multiple projects. For now, we provide reference for Linux, Busybox and U-Boot.
    • Every tag in each project’s git repository is now automatically indexed.
    • The design has been modernized and now fits comfortably on smaller screens like tablets.
    • The URL scheme has been simplified and extended with support for multiple projects. An HTTP redirector has been set up for backward compatibility.

    Elixir supports multiple projects

    Among other smaller improvements, it is now possible to copy and paste code directly without line numbers getting in the way.

    How does it work?

    Elixir is made of two Python scripts: “update” and “query”. The first looks for new tags and new blobs inside a Git repository, parses them and appends the new references to identifiers to a record inside the database. The second uses the database and the Git repository to display annotated source code and identifier references.

    The parsing itself is done with Ctags, which provides us with identifier definitions. In order to find the references to these identifiers, Elixir then simply checks each lexical token in the source file against the definition database, and if that word is defined, a new reference is added.

    Like in LXR 0.3, the database structure is kept very simple so that queries don’t have much work to do at runtime, thus speeding them up. In particular, we store references to a particular identifier as a simple list, which can be loaded and parsed very fast. The main difference with LXR is that our list includes references from every blob in the project, so we need to restrict it first to only the blobs that are part of the current version. This is done at runtime, simply by computing the intersection of this list with the list of blobs inside the current version.
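
    In very condensed form, and ignoring how the lists are actually stored, the query-time restriction can be sketched like this (the data is made up; this is not Elixir's actual code):

    # All references to an identifier, across every blob ever indexed:
    # identifier -> list of (blob id, line number). Made-up data.
    refs = {
        "kmalloc": [("b1", 42), ("b2", 10), ("b7", 99)],
    }

    # Blobs that make up the version currently being browsed.
    blobs_in_version = {"b1", "b7", "b9"}

    def references(identifier, version_blobs):
        """Restrict the global reference list to the blobs of one version."""
        return [(blob, line) for blob, line in refs.get(identifier, [])
                if blob in version_blobs]

    print(references("kmalloc", blobs_in_version))   # [('b1', 42), ('b7', 99)]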

    Finally, we kept the user interface code clearly segregated from the engine itself by making these two modules communicate through a Unix command-line interface. This means that you can run queries directly on the command-line without going through the web interface.

    Elixir code example

    What’s next?

    Our current focus is on improving multi-project support. In particular, each project has its own quirky way of using Git tags, which needs to be handled individually.

    At the user-interface level, we are evaluating the possibility of having auto-completion and/or fuzzy search of identifier names. Also, we are looking for a way to provide direct line-level access to references even in the case of very common identifiers.

    On the performance front, we would like to cut the indexing time by switching to a new database back-end that provides efficient appending to large records. Also, we could make source code queries faster by precomputing the references, which would also allow us to eliminate identifier “bleeding” between versions (the case where an identifier shows up as “defined in 0 files” because it is only defined in another version).

    If you think of other ways we could improve our service, don’t hesitate to drop us a feature request or a patch!

    Bonus: why call it “Elixir”?

    On the spur of the moment, it seemed like a nice pun on the name “LXR”. But in retrospect, we wish to apologize to the Elixir language team and the community at large for unnecessary namespace pollution.

    by Mikael Bouillot at June 13, 2017 07:39 AM

    June 09, 2017

    Free Electrons

    Beyond boot testing: custom tests with LAVA

    Since April 2016, we have had our own automated testing infrastructure to validate the Linux kernel on a large number of hardware platforms. We use this infrastructure to contribute to the KernelCI project, which tests the Linux kernel every day. However, the tests done by KernelCI are really basic: it’s mostly booting a basic Linux system and checking that it reaches a shell prompt.

    However, LAVA, the software component at the core of this testing infrastructure, can do a lot more than just basic tests.

    The need for custom tests

    With some of our engineers being Linux maintainers and given all the platforms we need to maintain for our customers, being able to automatically test specific features beyond a simple boot test was a very interesting goal.

    In addition, manually testing a kernel change on a large number of hardware platforms can be really tedious. Being able to quickly send test jobs that will use an image you built on your machine can be a great advantage when you have some new code in development that affects more than one board.

    We identified two main use cases for custom tests:

    • Automatic tests to detect regression, as does KernelCI, but with more advanced tests, including platform specific tests.
    • Manual tests executed by engineers to validate that the changes they are developing do not break existing features, on all platforms.

    Overall architecture

    Several tools are needed to run custom tests:

    • The LAVA instance, which controls the hardware platforms to be tested. See our previous blog posts on our testing hardware infrastructure and software architecture
    • An appropriate root filesystem, that contains the various userspace programs needed to execute the tests (benchmarking tools, validation tools, etc.)
    • A test suite, which contains various scripts executing the tests
    • A custom test tool that glues together the different components

    The custom test tool knows all the hardware platforms available and which tests and kernel configurations apply to which hardware platforms. It identifies the appropriate kernel image, Device Tree, root filesystem image and test suite and submits a job to LAVA for execution. LAVA will download the necessary artifacts and run the job on the appropriate device.

    Building custom rootfs

    When it comes to testing specific drivers, dedicated testing, validation or benchmarking tools are sometimes needed. For example, for storage device testing, bonnie++ can be used, while iperf is nice for networking testing. As the default root filesystem used by KernelCI is really minimalist, we need to build our own, one for each architecture we want to test.

    Buildroot is a simple yet efficient tool to generate root filesystems; it is also used by KernelCI to build their minimalist root filesystems. We chose to use it and made custom configuration files to match our needs.

    We ended up with custom rootfs built for ARMv4, ARMv5, ARMv7, and ARMv8, which for now embed Bonnie++, iperf, ping (not the Busybox implementation) and other small tools that aren’t included in the default Buildroot configuration.

    Our Buildroot fork that includes our custom configurations is available as the buildroot-ci Github project (branch ci).

    The custom test tool

    The custom test tool is the tool that binds the different elements of the overall architecture together.

    One of the main features of the tool is to send jobs. Jobs are text files used by LAVA to know what to do on which device. As they are described in LAVA as YAML files (in version 2 of the API), it is easy to use templates to generate them from a single model. Some information is quite static, such as the Device Tree name for a given board or the rootfs version to use, while other details change for every job, such as the kernel to use or which test to run.
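
    As an illustration of that templating step, here is a rough Python sketch of how such a job could be generated. The fields follow the general shape of a LAVA v2 job, but the URLs, test repository and exact layout are placeholders rather than the actual templates used by ctt.py:

    JOB_TEMPLATE = """\
    job_name: {board}-{test}
    device_type: {board}
    actions:
      - deploy:
          to: tftp
          kernel:
            url: {kernel_url}
          dtb:
            url: {dtb_url}
      - boot:
          method: u-boot
      - test:
          definitions:
            - repository: https://example.com/test_suite.git   # placeholder
              from: git
              path: {test}.yaml
              name: {test}
    """

    def make_job(board, test, kernel_url, dtb_url):
        # Merge the static, board-specific values with the per-job parameters.
        return JOB_TEMPLATE.format(board=board, test=test,
                                   kernel_url=kernel_url, dtb_url=dtb_url)

    print(make_job("beaglebone-black", "mmc",
                   "http://example.com/zImage",
                   "http://example.com/am335x-boneblack.dtb"))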

    We made a tool able to get the latest kernel images from KernelCI to quickly send jobs without having to compile a custom kernel image. If the need is to test a custom image built locally, the tool is also able to send files to the LAVA server through SSH, to provide a custom kernel image.

    The entry point of the tool is ctt.py, which allows creating new jobs, providing many options to define the various aspects of the job (kernel, Device Tree, root filesystem, test, etc.).

    This tool is written in Python, and lives in the custom_tests_tool Github project.

    The test suite

    The test suite is a set of shell scripts that perform tests returning 0 or 1 depending on the result. This test suite is included inside the root filesystem by LAVA as part of a preparation step for each job.

    We currently have a small set of tests:

    • boot test, which simply returns 0. Such a test will be successful as soon as the boot succeeds.
    • mmc test, to test MMC storage devices
    • sata test, to test SATA storage devices
    • crypto test, to do some minimal testing of cryptographic engines
    • usb test, to test USB functionality using mass storage devices
    • simple network test, that just validates network connectivity using ping

    All those tests only require the target hardware platform itself. However, for more elaborate network tests, we needed to get two devices to interact with each other: the target hardware platform and a reference PC platform. For this, we use the LAVA MultiNode API. It allows a test to span multiple devices, which we use to perform multiple iperf sessions to benchmark the bandwidth. This test therefore has one part running on the target device (network-board) and one part running on the reference PC platform (network-laptop).

    Our current test suite is available as the test_suite Github project. It is obviously limited to just a few tests for now; we hope to extend the tests in the near future.

    First use case: daily tests

    As previously stated, it’s important for us to know about regressions introduced in the upstream kernel. Therefore, we have set up a simple daily cron job that:

    • Sends custom jobs to all boards to validate the latest mainline Linux kernel and latest linux-next
    • Aggregates results from the past 24 hours and sends emails to subscribed addresses
    • Updates a dashboard that displays results in a very simple page

    A nice dashboard showing the tests of the Beaglebone Black and the Nitrogen6x.

    Second use case: manual tests

    The custom test tool ctt.py has a simple command line interface. It’s easy for someone to set it up and send custom jobs. For example:

    ctt.py -b beaglebone-black -m network
    

    will start the network test on the BeagleBone Black, using the latest mainline Linux kernel built by KernelCI. On the other hand:

    ctt.py -b armada-7040-db armada-8040-db -t mmc --kernel arch/arm64/boot/Image --dtb-folder arch/arm64/boot/dts/
    

    will run the mmc test on the Marvell Armada 7040 and Armada 8040 development boards, using the locally built kernel image and Device Tree.

    The result of the job is sent over e-mail when the test has completed.

    Conclusion

    Thanks to this custom test tool, we now have an infrastructure that leverages our existing lab and LAVA instance to execute more advanced tests. Our goal is now to increase the coverage, by adding more tests, and run them on more devices. Of course, we welcome feedback and contributions!

    by Florent Jacquet at June 09, 2017 02:25 PM

    May 30, 2017

    Bunnie Studios

    Name that Ware May 2017

    The Ware for May 2017 is shown below.

    This is another one where the level of difficulty will depend on whether I cropped enough detail out of the photo to make it challenging but not impossible. If you do figure this one out quickly, I’m curious to hear which detail tipped you off!

    by bunnie at May 30, 2017 08:04 AM

    Winner, Name that Ware April 2017

    The Ware for April 2017 is an HP 10780A optical receiver. Congrats to Brian for absolutely nailing this one! email me for your prize.

    by bunnie at May 30, 2017 08:04 AM

    May 29, 2017

    Open Hardware Repository

    sfp-plus-i2c - SaFariPark now open for public

    SaFariPark is not a site to book holidays on the African plains - though with additional personal funding I am willing to add that feature. It is a software tool to read and write the digital interface of SFP/SFP+ transceiver modules. Together with a device to plug in multiple (4) SFP/SFP+ modules, creatively called MultiSFP (see Figure 1), it is a versatile tool for all your SFP needs. MultiSFP and SaFariPark have been developed by Nikhef as part of the ASTERICS program, and all is open hardware/open source.


    Figure 1 - MultiSFP front panel

    MultiSFP supports a 10 Gigabit capable connection to the electrical interface of each SFP. Via one USB port each SFP I2C bus can be exercised using SaFariPark. The software’s main window (Figure 2) exposes most of the functionality:
    • Editing of individual fields in the SFP module
    • Fixing corrupted SFP EEPROM data, recalculating checksums
    • Showing and saving SFP+ sensor data such as TX/RX power and temperature.
    • Selectively copying content of one SFP module to multiple other modules
    • Laser tuning of optical SFP+ modules


    Figure 2 - Main window of SaFariPark

    Apart from this, SaFariPark allows you to dump the entire EEPROM content, and to extend the SFP+ EEPROM data dictionary with custom fields using XML. This enables users to add fields for custom or exotic SFP+ modules. As the software is written in Java, it has been verified to work on Linux and Windows. Mac has not been tested yet.

    More information can be found on the sfp-plus-i2c project page in the Open Hardware Repository.

    by Vincent van Beveren (v.van.beveren@nikhef.nl) at May 29, 2017 03:07 PM

    May 28, 2017

    Harald Welte

    Playing back GSM RTP streams, RTP-HR bugs

    Chapter 0: Problem Statement

    In an all-IP GSM network, where we use Abis, A and other interfaces within the cellular network over IP transport, the audio of voice calls is transported inside RTP frames. The codec payload in those RTP frames is the actual codec frame of the respective cellular voice codec. In GSM, there are four relevant codecs: FR, HR, EFR and AMR.

    Every so often during the (by now many years of) development of Osmocom cellular infrastructure software, it would have been useful to be able to quickly play back the audio to analyze a given issue.

    However, until now we didn't have that capability. The reason is relatively simple: in Osmocom, we generally don't do transcoding but simply pass the voice codec frames from left to right. They're only transcoded inside the phones or inside some external media gateway (in the case of larger networks).

    Chapter 1: GSM Audio Pocket Knife

    Back in 2010, when we were very actively working on OsmocomBB, the telephone-side GSM protocol stack implementation, Sylvain Munaut wrote the GSM Audio Pocket Knife (gapk) in order to be able to convert between different formats (representations) of codec frames. In cellular communications, everyone comes up with their own representation for the codec frames: the way they look on E1 as a TRAU frame is completely different from what the RTP payload looks like, or what the TI Calypso DSP uses internally, or what a GSM tester like the Racal 61x3 uses. The differences are mostly about data types used, bit-endianness as well as padding and headers. And of course those different formats exist for each of the four codecs :/

    In 2013 I first added simplistic RTP support for FR-GSM to gapk, which was sufficient for my debugging needs back then. Still, you had to save the decoded PCM output to a file and play that back, or use a pipe into aplay.

    Last week, I picked up this subject again and added a long series of patches to gapk:

    • support for variable-length codec frames (required for AMR support)
    • support for AMR codec encode/decode using libopencore-amrnb
    • support of all known RTP payload formats for all four codecs
    • support for direct live playback to a sound card via ALSA

    All of the above can now be combined to make GAPK bind to a specified UDP port and play back the RTP codec frames that anyone sends to that port using a command like this:

    $ gapk -I 0.0.0.0/30000 -f rtp-amr -A default -g rawpcm-s16le

    I've also merged a change to OsmoBSC/OsmoNITB which allows the administrator to re-direct the voice of any active voice channel towards a user-specified IP address and port. Using that you can simply disconnect the voice stream from its normal destination and play back the audio via your sound card.

    Chapter 2: Bugs in OsmoBTS GSM-HR

    While going through the exercise of implementing the above extension to gapk, I had lots of trouble getting it to work for GSM-HR.

    After some more digging, it seems there are two conflicting specifications on how to format the RTP payload for half-rate GSM: the IETF format defined in RFC 5993, and the ETSI format.

    In Osmocom, we claim to implement RFC5993, but it turned out that (at least) osmo-bts-sysmo (for sysmoBTS) was actually implementing the ETSI format instead.

    And even worse, osmo-bts-sysmo gets even the ETSI format wrong: each of the codec parameters (which are unaligned bit-fields) is in the wrong bit-endianness :(

    Both of the above were coincidentally also discovered by Sylvain Munaut during operation of the 32C3 GSM network in December 2015 and resulted in the two following "work-around" patches: * HACK for HR * HACK: Fix the bit order in HR frames

    Those merely worked around those issues in the rtp_proxy of OsmoNITB, rather than addressing the real issue. That's ok, they were "quick" hacks to get something working at all during a four-day conference. I'm now working on "real" fixes in osmo-bts-sysmo. The devil is of course in the details, when people upgrade one BTS but not the other and want to inter-operate, ...

    It yet remains to be investigated how osmo-bts-trx and other osmo-bts ports behave in this regard.

    Chapter 3: Conclusions

    Most definitely it is once again a very clear sign that more testing is required. It's tricky to catch even with osmo-gsm-tester, as GSM-HR works between two phones or even two instances of osmo-bts-sysmo, since both sides of the implementation have the same (wrong) understanding of the spec.

    Given that we can only catch this kind of bug together with the hardware (the DSP runs the PHY code), pure unit tests wouldn't catch it. And the end-to-end test is also not very well suited to it. It seems to call for something in between. Something like an A-bis interface level test.

    We need more (automatic) testing. I cannot say that often enough. The big challenge is how to convince contributors and customers that they should invest their time and money there, rather than into yet another (not automatically tested) feature.

    by Harald Welte at May 28, 2017 10:00 PM

    Mirko Vogt, nanl.de

    SonOTA – Flashing Itead Sonoff devices via original OTA mechanism

    Long story short

    There’s now a script with which you can flash your sonoff device via the original internal OTA upgrade mechanism, meaning, no need to open, solder, etc. the device to get your custom firmware onto it.

    This isn’t perfect (yet) — please mind the issues at the end of this post!

    https://github.com/mirko/SonOTA

    Credits

    First things first: Credits!
    The problem with credits is you usually forget somebody and that’s most likely happening here as well.
    I read around quite a lot, gathered information and partially don’t even remember anymore where I read what (first).

    Of course I’m impressed by the entire Tasmota project and what it enables one to do with the Itead Sonoff and similar devices.

    Special thanks go to khcnz who helped me a lot in a discussion documented here.

    I’d also like to mention Richard Burtons, who I didn’t interact with directly but only read his blog. That guy apparently was too bored by all the amazing tech stuff he was doing for a living, so he took a medical degree and is now working as a doctor, has a passion for horology (meaning, he’s building a turrot clock), is sailing regattas with his own rs200, decompiles and reverse-engineers proprietary bootloaders in his spare time and writes a new bootloader called rboot for the ESP8266 as a side project.

    EDIT: Jan Almeroth already reversed some of the protocol in 2016 and also documented the communication between the proprietary EWeLink app and the AWS cloud. Unfortunately I only became aware of that great post after I already finished mine.

    Introduction Sonoff devices

    Quite recently the Itead Sonoff series — a bunch of ESP8266 based IoT home automation devices — was brought to my attention.

    The ESP8266 is a low-power consumption SoC especially designed for IoT purposes. It’s sold by Espressif and runs a 32-bit processor featuring the Xtensa instruction set (licensed from Tensilica as ASIC IP), with WiFi onboard.

    The Sonoff devices built around this SoC basically expect high-voltage (mains) input, and therefore contain an AC/DC (5V) converter, the ESP8266 SoC and a relay switching the high-voltage output.
    They’re sold as wall switches (“Sonoff Touch”), E27 socket adapters (“Slampher”), power sockets (“S20 smart socket”) or — in the most basic and cheapest model — all of that in a simple case (“Sonoff Basic”).
    There is also a bunch of sensor devices, measuring temperature, power consumption, humidity, noise levels, fine dust, etc.

    Though I’m rather sceptical about the whole IoT (development) philosophy, I always was (and still am) interested in low-cost and power-saving home automation which is completely and exclusively under my control.

    That implies I’m obviously not interested in some random IoT devices being necessarily connected to some Google/Amazon/Whatever cloud, even less so if sensitive data is transmitted without me knowing (but very much suspecting) what it’s used for.

    Guess what the Itead Sonoff devices do? Exactly that! They even feature Amazon Alexa and Google Nest support! And of course you have to use their proprietary app to configure and control your devices — via the Amazon cloud.

    However, as said earlier, they’re based on the ESP8266 SoC, around which a great deal of OpenSource projects evolved. For some reason especially the Arduino community pounced on that SoC, enabling a much broader range of people to play around with and program for those devices. Whether that’s a good and/or bad thing is surely debatable.

    I’ll spare you the details about all the projects I ran into, there’s plenty of cool stuff out there.

    I decided to go for the Sonoff-Tasmota project which is quite actively developed and supports most of the currently available Sonoff devices.

    It provides an HTTP and MQTT interface and doesn’t need any connection to the internet at all. As MQTT server (called a broker in MQTT parlance) I use mosquitto, which I run on my OpenWrt WiFi router.

    Flashing custom firmware (via serial)

    Flashing your custom firmware onto those devices however always requires opening them, soldering a serial cable, pulling GPIO0 down to get the SoC into programming mode (which, depending on the device type, again involves soldering) and then flashing your firmware via serial.

    Side note: Why do all those projects describing the flashing procedure name an “FTDI serial converter” as a requirement? Any serial TTL converter does the job.
    And apart from the fact that FTDI is not a product but a company, it’s a pretty shady one. I’d just like to recall the “incident” where FTDI released new drivers for their chips which intentionally bricked clones of their converters.

    As for how to manually flash via serial — even though firmware replacement via OTA (kinda) works now, you still might want to unbrick or debug your device — the Tasmota wiki provides instructions for each of the supported devices.

    Anyway, as I didn’t want to open and solder every device I intend to use, I took a closer look at the original firmware and its OTA update mechanism.

    Protocol analysis

    The first thing the device does after being configured (meaning, it got configured by the proprietary app and now has internet access via your local WiFi network) is to resolve the hostname `eu-disp.coolkit.cc` and attempt to establish an HTTPS connection.

    Though the connection uses SSL, the device doesn’t do any server certificate verification — so splitting the SSL connection and man-in-the-middling it is fairly easy.

    As a side effect I ported the mitm project sslsplit to OpenWrt and created a separate “interception” network on my WiFi router. Now I only need to join that WiFi network and all SSL connections get split, their payload logged and provided on an FTP share. Intercepting SSL connections never felt easier.

    Back to the protocol: we’re assuming at this point that the Sonoff device was already configured (e.g. by the official eWeLink app), which means it has joined our WiFi network, acquired IP settings via DHCP and has access to the internet.

    The Sonoff device sends a dispatch call as an HTTPS POST request to eu-disp.coolkit.cc, including some JSON-encoded data about itself:

    
    POST /dispatch/device HTTP/1.1
    Host: eu-disp.coolkit.cc
    Content-Type: application/json
    Content-Length: 152
    
    {
      "accept":     "ws;2",
      "version":    2,
      "ts":         119,
      "deviceid":   "100006XXXX",
      "apikey":     "6083157d-3471-4f4c-8308-XXXXXXXXXXXX",
      "model":      "ITA-GZ1-GL",
      "romVersion": "1.5.5"
    }
    

    It expects an answer, also JSON-encoded, containing the host to connect to:
    
    HTTP/1.1 200 OK
    Server: openresty
    Date: Mon, 15 May 2017 01:26:00 GMT
    Content-Type: application/json
    Content-Length: 55
    Connection: keep-alive
    
    {
      "error":  0,
      "reason": "ok",
      "IP":     "52.29.48.55",
      "port":   443
    }
    

    which is used to establish a WebSocket connection
    
    GET /api/ws HTTP/1.1
    Host: iotgo.iteadstudio.com
    Connection: upgrade
    Upgrade: websocket
    Sec-WebSocket-Key: ITEADTmobiM0x1DaXXXXXX==
    Sec-WebSocket-Version: 13
    

    
    HTTP/1.1 101 Switching Protocols
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Accept: q1/L5gx6qdQ7y3UWgO/TXXXXXXA=
    

    which will subsequently be used for all further interchange.
    Payload via the established WebSocket channel continues to be encoded in JSON.
    The messages coming from the device can be classified into action-requests initiated by the device (which expect acknowledgements from the server) and acknowledgement messages for requests initiated by the server.

    The first requests are action-requests coming from the device:

    1) action: register

    {
      "userAgent":  "device",
      "apikey":     "6083157d-3471-4f4c-8308-XXXXXXXXXXXX",
      "deviceid":   "100006XXXX",
      "action":     "register",
      "version":    2,
      "romVersion": "1.5.5",
      "model":      "ITA-GZ1-GL",
      "ts":         712
    }

    responded by the server with
    {
      "error":       0,
      "deviceid":   "100006XXXX",
      "apikey":     "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX",
      "config": {
        "hb":         1,
        "hbInterval": 145
      }
    }

    As can be seen, action-requests initiated from the server side also carry an apikey field, which can be — as long as it’s used consistently in that WebSocket session — any generated UUID other than the one used by the device.

    2) action: date

    {
      "userAgent":  "device",
      "apikey":     "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX",
      "deviceid":   "100006XXXX",
      "action"      :"date"
    }

    responded with
    {
      "error":      0,
      "deviceid":   "100006XXXX",
      "apikey":     "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX",
      "date":       "2017-05-15T01:26:01.498Z"
    }

    Pay attention to the date format: it is some kind of ISO 8601, but the parser is really picky about it. While Python’s datetime.isoformat() function e.g. returns a string taking microseconds into account, the parser on the device will simply fail to parse that. It also always expects the (actually optional) timezone to be specified as UTC and only as a trailing Z (though according to the spec “+00:00” would be valid as well).
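
    A small Python sketch of a formatter producing a timestamp in the shape the device-side parser accepts (millisecond precision, UTC, literal trailing Z) might look like this:

    from datetime import datetime, timezone

    def device_date():
        # No microseconds, UTC only, and a literal trailing "Z" --
        # datetime.isoformat() alone would produce neither.
        now = datetime.now(timezone.utc)
        return now.strftime("%Y-%m-%dT%H:%M:%S.") + "%03dZ" % (now.microsecond // 1000)

    print(device_date())   # e.g. 2017-05-15T01:26:01.498Z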

    3) action: update — the device tells the server its switch status, the MAC address of the access point it is connected to, signal quality, etc.
    This message also appears every time the device status changes, e.g. when it is switched on/off via the app or locally by pressing the button.

    {
      "userAgent":      "device",
      "apikey":         "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX",
      "deviceid":       "100006XXXX",
      "action":         "update",
      "params": {
        "switch":         "off",
        "fwVersion":      "1.5.5",
        "rssi":           -41,
        "staMac":         "5C:CF:7F:F5:19:F8",
        "startup":        "off"
      }
    }

    simply acknowledged with
    {
      "error":      0,
      "deviceid":   "100006XXXX",
      "apikey":     "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX"
    }

    4) action: query — the device queries potentially configured timers
    {
      "userAgent":  "device",
      "apikey":     "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX",
      "deviceid":   "100006XXXX",
      "action":     "query",
      "params": [
        "timers"
      ]
    }

    As there are no timers configured, the answer simply contains a "params":0 key-value pair:
    {
      "error":      0,
      "deviceid":   "100006XXXX",
      "apikey":     "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX",
      "params":     0
    }

    That’s it – that’s the basic handshake after the (configured) device powers up.

    Now the server can tell the device to do stuff.

    The sequence number is used by the device to acknowledge particular action-requests, so the response can be mapped back to the actual request. It appears to be a UNIX timestamp with millisecond precision, which doesn’t seem like the best source for generating a sequence number (duplicates, etc.) but works well enough.

    Let’s switch the relay:

    {
      "action":     "update",
      "deviceid":   "100006XXXX",
      "apikey":     "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX",
      "userAgent":  "app",
      "sequence":   "1494806715179",
      "ts":         0,
      "params": {
        "switch":     "on"
      },
      "from":       "app"
    }
    
    {
      "action":     "update",
      "deviceid":   "100006XXXX",
      "apikey":     "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX",
      "userAgent":  "app",
      "sequence":   "1494806715193",
      "ts":         0,
      "params": {
        "switch":     "off"
      },
      "from":       "app"
    }

    As mentioned earlier, each action-request is answered with a proper acknowledgement.
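
    For reference, a small Python sketch of how such a server-side switch command could be assembled — the deviceid/apikey values are the placeholders used throughout this post, and the sequence is simply a millisecond UNIX timestamp:

    import json
    import time

    def make_switch_action(deviceid, apikey, state):
        return json.dumps({
            "action":    "update",
            "deviceid":  deviceid,
            "apikey":    apikey,
            "userAgent": "app",
            "sequence":  str(int(time.time() * 1000)),  # ms-precision timestamp
            "ts":        0,
            "params":    {"switch": state},
            "from":      "app",
        })

    print(make_switch_action("100006XXXX", "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX", "on"))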

    And — finally — what the server is now also capable of doing is telling the device to update itself:

    {
      "action":     "upgrade",
      "deviceid":   "100006XXXX",
      "apikey":     "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX",
      "userAgent":  "app",
      "sequence":   "1494802194654",
      "ts":         0,
      "params": {
        "binList":[
          {
            "downloadUrl":  "http://52.28.103.75:8088/ota/rom/xpiAOwgVUJaRMqFkRBsoI4AVtnozgwp1/user1.1024.new.2.bin",
            "digest":       "1aee969af1daf96f3f120323cd2c167ae1aceefc23052bb0cce790afc18fc634",
            "name":         "user1.bin"
          },
          {
            "downloadUrl":  "http://52.28.103.75:8088/ota/rom/xpiAOwgVUJaRMqFkRBsoI4AVtnozgwp1/user2.1024.new.2.bin",
            "digest":       "6c4e02d5d5e4f74d501de9029c8fa9a7850403eb89e3d8f2ba90386358c59d47",
            "name":         "user2.bin"
          }
        ],
        "model":    "ITA-GZ1-GL",
        "version":  "1.5.5",
      }
    }

    After successful download and verification of the image’s checksum the device returns:
    {
      "error":      0,
      "userAgent":  "device",
      "apikey":     "85036160-aa4a-41f7-85cc-XXXXXXXXXXXX",
      "deviceid":   "100006XXXX",
      "sequence":   "1495932900713"
    }

    The downloadUrl field should be self-explanatory (the subsequent HTTP GET requests to those URLs contain some more data as CGI parameters, which however can be omitted).

    The digest is a sha256 hash of the file, and the name is the partition the file should be written to.
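
    The digest can be reproduced locally with a few lines of Python — assuming the upgrade image has been downloaded as user1.bin:

    import hashlib

    def sha256_digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha256_digest("user1.bin"))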

    Implementing server side

    After some early attempts I decided to go for a Python implementation using the Tornado web server stack.
    This decision was mainly based on it providing functionality for HTTP (obviously) as well as WebSockets and asynchronous handling of requests.

    The final script can be found here: https://github.com/mirko/SonOTA
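
    To give an idea of the moving parts, here is a heavily simplified sketch (not the actual SonOTA code) of the two server-side endpoints in Tornado: one answering the initial dispatch POST with our own address, and one accepting the WebSocket connection the device opens afterwards. The IP, port and reply contents are placeholders; the real devices expect HTTPS/WSS, which Tornado handles via ssl_options.

    import json
    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    MY_IP, MY_PORT = "192.168.0.2", 8443   # placeholders for the machine running this script

    class DispatchHandler(tornado.web.RequestHandler):
        def post(self):
            # The device POSTs JSON about itself; we point it at ourselves.
            self.write({"error": 0, "reason": "ok", "IP": MY_IP, "port": MY_PORT})

    class DeviceWSHandler(tornado.websocket.WebSocketHandler):
        def on_message(self, message):
            msg = json.loads(message)
            # Acknowledge whatever action the device sends (register, date,
            # update, query, ...); a real implementation answers each action
            # as described above.
            self.write_message(json.dumps({
                "error": 0,
                "deviceid": msg.get("deviceid"),
                "apikey": msg.get("apikey"),
            }))

    app = tornado.web.Application([
        (r"/dispatch/device", DispatchHandler),
        (r"/api/ws", DeviceWSHandler),
    ])

    if __name__ == "__main__":
        app.listen(MY_PORT)
        tornado.ioloop.IOLoop.current().start()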

    ==> Trial & Error

    1st attempt

    As user1.1024.new.2.bin and user2.1024.new.2.bin look almost the same, let’s just use the same image for both, in this case a Tasmota build:

    MOEP! Boot fails.

    Reason: the Tasmota build also contains the bootloader, which the Espressif OTA mechanism doesn’t expect to be in the image.

    2nd attempt

    Chopping off the first 0x1000 bytes which contain the bootloader plus padding (filled up with 0xAA bytes).

    MOEP! Boot fails.

    Boot mode 1 and 2 / v1 and v2 image headers

    The (now chopped) image and the original upgrade images appear to have different headers — even the very first byte (the file’s magic byte) differs.

    The original image starts with 0xEA while the Tasmota build starts with 0xE9.

    Apparently there are two image formats (called v1 and v2 or boot mode 1 and boot mode 2).
    The former (older) one — used by Arduino/Tasmota — starts with 0xE9, while the latter (and apparently newer one) — used by the original firmware — starts with 0xEA.

    The technical differences are very well documented by the ESP8266 Reverse Engineering Wiki project; regarding the flash format and the v1/v2 headers in particular, see the SPI Flash Format wiki page.

    The original bootloader only accepts images starting with 0xEA while the bootloader provided by Arduino/Tasmota only accepts such starting with 0xE9.

    3rd attempt

    Converting Arduino images to v2 images

    Easier said than done, as the Arduino framework doesn’t seem to be capable of creating v2 images and none of the common tools appear to have conversion functionality.

    Taking a closer look at the esptool.py project however, there seems to be (undocumented) functionality for this.
    esptool.py has the elf2image argument which — according to the source — allows switching between conversion to v1 and v2 images.

    When using elf2image and also passing the --version parameter — which normally prints out the version string of the tool — the --version parameter gets redefined and then expects an argument: 1 or 2.

    Besides the sonoff.ino.bin file, the Tasmota project also creates a sonoff.ino.elf file, which can now be used in conjunction with esptool.py and the elf2image parameter to create v2 images.

    Example: esptool.py elf2image --version 2 tmp/arduino_build_XXXXXX/sonoff.ino.elf

    WORKS! MOEP! WORKS! MOEP!

    Remember the upgrade-action passed a 2-element list of download URLs to the device, having different names (user1.bin and user2.bin)?

    This procedure now only works if the user1.bin image is being fetched and flashed.

    Differences between user1.bin and user2.bin

    The flash on the Sonoff devices is split into 2 parts (simplified!) which basically contain the same data (user1 and user2). As OTA upgrades are known to fail sometimes for whatever reason, the upgrade always happens on the currently inactive part, meaning that if the device is currently running the code from the user1 part, the upgrade will be written to the user2 part.
    That mechanism was not invented by Itead, but is actually provided as an off-the-shelf OTA solution by Espressif (the SoC manufacturer) itself.

    For 1MB flash chips the user1 image is stored at offset 0x01000 while the user2 image is stored at 0x81000.

    And indeed, the two original upgrade images (user1 and user2) differ significantly.

    If flashing a user2 image onto the user1 part of the flash the device refuses to boot and vice versa.

    While there’s not much information about how user1.bin and user2.bin technically differ from each other, khcnz pointed me to an Espressif document stating:

    user1.bin and user2.bin are [the] same software placed to different regions of [the] flash. The only difference is [the] address mapping on flash.

    4th attempt

    So apparently those 2 images must be created differently indeed.

    Again it was khcnz who pointed me to different linker scripts used for each image within the original SDK.
    Diffing
    https://github.com/espressif/ESP8266_RTOS_SDK/blob/master/ld/eagle.app.v6.new.1024.app1.ld
    and
    https://github.com/espressif/ESP8266_RTOS_SDK/blob/master/ld/eagle.app.v6.new.1024.app2.ld
    reveals that the irom0_0_seg differs (org = 0x40201010 vs. org = 0x40281010).

    As Tasmota doesn’t make use of the user1/user2 ping-pong mechanism, it only creates images supposed to go to 0x1000 (= the user1 partition).

    So to create a user2.bin image — in our case for a device having a 1MB flash chip and allocating (only) 64K for SPIFFS — we have to modify the following linker script accordingly:

    --- a/~/.arduino15/packages/esp8266/hardware/esp8266/2.3.0/tools/sdk/ld/eagle.flash.1m64.ld
    +++ b/~/.arduino15/packages/esp8266/hardware/esp8266/2.3.0/tools/sdk/ld/eagle.flash.1m64.ld
    @@ -7,7 +7,7 @@ MEMORY
       dport0_0_seg :                        org = 0x3FF00000, len = 0x10
       dram0_0_seg :                         org = 0x3FFE8000, len = 0x14000
       iram1_0_seg :                         org = 0x40100000, len = 0x8000
    -  irom0_0_seg :                         org = 0x40201010, len = 0xf9ff0
    +  irom0_0_seg :                         org = 0x40281010, len = 0xf9ff0
     }
     
     PROVIDE ( _SPIFFS_start = 0x402FB000 );

    So we now create a user1 image (without the above modification) and a user2 image (with the above modification) and convert them to v2 images with esptool.py as described above.

    –> WORKS!

    Depending on whether the original firmware was loaded from the user1 or user2 partition, it will fetch and flash the other image, telling the bootloader afterwards to change the active partition.

    Issues

    Mission accomplished? Not just yet…

    Although our custom firmware is now flashed via the original OTA mechanism and running, the final setup differs in two major aspects compared to flashing the device via serial:

    • The bootloader is still the original one
    • Our custom image might have ended up in the user2 partition

    Each point alone already results in the Tasmota/Arduino OTA mechanism not working.
    Additionally — since the bootloader stays the original one — it still only accepts v2 images and still messes with us with its ping-pong mechanism.

    This issue is already being addressed, though, and the best way to solve it is being discussed in the issue ticket mentioned at the very beginning.

    Happy hacking!

    by mirko at May 28, 2017 08:05 PM

    May 26, 2017

    Open Hardware Repository

    Hdlmake - HDLMake version 3.0 promoted to Master

    HDLMake 3.0

    After a massive refactoring & upgrade process, we have finally published the brand-new HDLMake 3.0 version. This version not only sports a whole set of new features, but has been carefully crafted so that the source code providing a common interface for the growing set of supported tools can be easily maintained.

    New Features

    These are some of the highlighted features for the new HDLMake v3.0 Release:

    • Updated HDL code parser and solver: the new release includes by default the usage of an embedded HDL code parser and file dependency solver to manage the synthesis and simulation process in an optimal way.
    • Support for Python 3.x: the new release supports both Python2.7 and Python3.x deployments in a single source code branch, enabling an easier integration into newer O.S. distributions.
    • Native support for Linux & Windows shells: the new release not only supports Linux shells as the previous ones did, but also features native support for Windows shells such as the classic CMD prompt or the newer PowerShell.
    • TCL based Makefiles: in order to streamline the process of supporting as many tools as possible in a hierarchical way, in the rapidly evolving world of FPGA technology and tool providers, we have adopted TCL as the common language layer used by the generated synthesis Makefiles.
    • Proper packaging: from HDLMake 3.0 onwards, the source code is distributed as a Python package, which allows for a much cleaner installation procedure.

    More info

    You can find more info about the HDLMake 3.0 version on the project page in the Open Hardware Repository.

    by Javier D. Garcia-Lasheras (jgarcia@gl-research.com) at May 26, 2017 01:35 PM

    May 23, 2017

    Harald Welte

    Power-cycling a USB port should be simple, right?

    Every so often I happen to be involved in designing electronics equipment that's supposed to run reliably, remotely, in inaccessible locations, without any ability for "remote hands" to perform things like power-cycling or the like. I'm talking about really remote locations, possibly with no or only limited back-haul, and a very high cost of ever sending somebody there for maintenance.

    Given that a lot of computer peripherals (chips, modules, ...) use USB these days, this is often some kind of an embedded ARM (rarely x86) SoM or SBC, which is hooked up to a custom board that contains a USB hub chip as well as a line of peripherals.

    One of the most important lessons I've learned from experience is: never trust reset signals/lines, always include power-switching capability. There are many chips and electronics modules on the market that either have no RESET at all, or might claim to have a hardware RESET line which you later (painfully) discover to be just a GPIO polled by software that can get stuck, leaving no way to really hard-reset the given component.

    In the case of a USB-attached device (even though the USB might only exist on a circuit board between two ICs), this is typically rather easy: The USB hub is generally capable of switching the power of its downstream ports. Many cheap USB hubs don't implement this at all, or implement only ganged switching, but if you carefully select your USB hub (or in the case of a custom PCB), you can make sure that the given USB hub supports individual port power switching.

    Now the next step is how to actually use this from your (embedded) Linux system. It turns out to be harder than expected. After all, we're talking about a standard feature that's been present in the USB specifications since USB 1.x in the late 1990s. So the expectation is that it should be straightforward to do with any decent operating system.

    I don't know how it is on other operating systems, but on Linux I couldn't really find a clean, proper way to do this. For more details, please read my post to the linux-usb mailing list.
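
    For reference, this is roughly what the hub class requests look like at the protocol level — a sketch using pyusb with placeholder IDs, sending ClearPortFeature/SetPortFeature with the PORT_POWER selector as defined in the USB 2.0 hub chapter. Whether it has any effect depends entirely on the hub actually implementing per-port power switching, and it is of course exactly the kind of ad-hoc poking one would prefer a proper kernel interface for:

    import time
    import usb.core

    USB_RT_PORT   = 0x23   # class request, recipient = other (hub port)
    USB_REQ_CLEAR = 0x01   # CLEAR_FEATURE
    USB_REQ_SET   = 0x03   # SET_FEATURE
    PORT_POWER    = 8      # feature selector from the USB 2.0 hub chapter

    hub = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # placeholder hub IDs
    port = 2                                                # placeholder port number

    hub.ctrl_transfer(USB_RT_PORT, USB_REQ_CLEAR, PORT_POWER, port)  # power off
    time.sleep(2)
    hub.ctrl_transfer(USB_RT_PORT, USB_REQ_SET, PORT_POWER, port)    # power on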

    Why am I running into this now? Is it such a strange idea? I mean, power-cycling a device should be the most simple and straight-forward thing to do in order to recover from any kind of "stuck state" or other related issue. Logical enabling/disabling of the port, resetting the USB device via USB protocol, etc. are all just "soft" forms of a reset which at best help with USB related issues, but not with any other part of a USB device.

    And in the case of e.g. a USB-attached cellular modem, we're actually talking about a multi-processor system with multiple built-in micro-controllers, at least one DSP, and an ARM core that might itself run another Linux (to implement the USB gadget) - certainly enough complex software that you would want to be able to power-cycle it...

    I'm curious what the response of the Linux USB gurus is.

    by Harald Welte at May 23, 2017 10:00 PM

    Open Hardware Repository

    Yet Another Micro-controller - YAM first release at OHR

    YAM release V1.4 is now available from CERN OHR.

    The core has already been used in a number of designs at ESRF.

    However there is still some pending work.
    • The co-processors have not been fully tested yet.
    • The yamasm assembler doesn't yet support the 3-operand implementation.

    by Christian Herve at May 23, 2017 08:41 AM

    May 22, 2017

    Free Electrons

    Introducing lavabo, board remote control software

    In two previous blog posts, we presented the hardware and software architecture of the automated testing platform we have created to test the Linux kernel on a large number of embedded platforms.

    The primary use case for this infrastructure was to participate in the KernelCI.org testing effort, which tests the Linux kernel every day on many hardware platforms.

    However, since our embedded boards are now fully controlled by LAVA, we wondered if we could not only use our lab for KernelCI.org, but also provide remote control of our boards to Free Electrons engineers so that they can access development boards from anywhere. lavabo was born from this idea and its goal is to allow full remote control of the boards as it is done in LAVA: interface with the serial port, control the power supply and provide files to the board using TFTP.

    The advantages of being able to access the boards remotely are obvious: allowing engineers working from home to work on their hardware platforms, avoid moving the boards out of the lab and back into the lab each time an engineer wants to do a test, etc.

    User’s perspective

    From a user’s point of view, lavabo is used through the eponymous command lavabo, which allows the user to:

    • List the boards and their status
      $ lavabo list
    • Reserve a board for lavabo usage, so that it is no longer used for CI jobs
      $ lavabo reserve am335x-boneblack_01
    • Upload a kernel image and Device Tree blob so that it can be accessed by the board through TFTP
      $ lavabo upload zImage am335x-boneblack.dtb
    • Connect to the serial port of the board
      $ lavabo serial am335x-boneblack_01
    • Reset the power of the board
      $ lavabo reset am335x-boneblack_01
    • Power off the board
      $ lavabo power-off am335x-boneblack_01
    • Release the board, so that it can once again be used for CI jobs
      $ lavabo release am335x-boneblack_01

    Overall architecture and implementation

    The following diagram summarizes the overall architecture of lavabo (components in green) and how it connects with existing components of the LAVA architecture.

    lavabo reuses LAVA tools and configuration files

    A client-server software

    lavabo follows the classical client-server model: the lavabo client is installed on the machines of users, while the lavabo server is hosted on the same machine as LAVA. The server-side of lavabo is responsible for calling the right tools directly on the server machine and making the right calls to LAVA’s API. It controls the boards and interacts with the LAVA instance to reserve and release a board.

    On the server machine, a specific Unix user is configured, through its .ssh/authorized_keys file, to automatically spawn the lavabo server program when someone connects. The lavabo client and server interact directly using their stdin/stdout, by exchanging JSON dictionaries. This interaction model was inspired by the Attic backup program. Therefore, the lavabo server is not a background process that runs permanently like traditional daemons.
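
    A minimal sketch of what that exchange could look like from the client side — the host name, account and message format here are placeholders, not the actual lavabo protocol:

    import json
    import subprocess

    def lavabo_request(message):
        # ssh runs the forced command configured in authorized_keys, which
        # starts the lavabo server; JSON dictionaries then travel over
        # stdin/stdout.
        proc = subprocess.Popen(["ssh", "lavabo@lab.example.com"],
                                stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        out, _ = proc.communicate(json.dumps(message).encode() + b"\n")
        return json.loads(out.decode())

    print(lavabo_request({"command": "reserve", "board": "am335x-boneblack_01"}))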

    Handling serial connection

    Exchanging JSON over SSH works fine to allow the lavabo client to provide instructions to the lavabo server, but it doesn’t work well to provide access to the serial ports of the boards. However, ser2net is already used by LAVA and provides a local telnet port for each serial port. lavabo simply uses SSH port-forwarding to redirect those telnet ports to local ports on the user’s machine.

    Different ways to connect to the serial

    Interaction with LAVA

    To use a board outside of LAVA, we have to interact with LAVA to tell it the board cannot be used anymore. We therefore worked with the LAVA developers to add API endpoints for putting boards online (release) and offline (reserve), as well as an endpoint to get the current status of a board (busy, idle or offline).

    These additions to the LAVA API are used by the lavabo server to reserve and release boards, so that there is no conflict between CI-related jobs (such as the ones submitted by KernelCI.org) and the direct use of boards for remote development.

    Interaction with the boards

    Now that we know how the client and the server interact and also how the server communicates with LAVA, we need a way to know which boards are in the lab, on which port the serial connection of a board is exposed and what are the commands to control the board’s power supply. All this configuration has already been given to LAVA, so lavabo server simply reads the LAVA configuration files.

    The last requirement is to provide files to the board, such as kernel images, Device Tree blobs, etc. Indeed, from a network point of view, the boards are located in a different subnet not routed directly to the users’ machines. LAVA already has a directory accessible through TFTP from the boards, which is one of the mechanisms used to serve files to boards. Therefore, the easiest and most obvious way is to send files from the client to the server and move them to this directory, which we implemented using SFTP.

    User authentication

    Since the serial port cannot be shared among several sessions, it is essential to guarantee a board can only be used by one engineer at a time. In order to identify users, we have one SSH key per user in the .ssh/authorized_keys file on the server, each associated to a call to the lavabo-server program with a different username.

    This allows us to identify who is reserving/releasing the boards, and make sure that serial port access, or requests to power off or reset the boards are done by the user having reserved the board.

    For TFTP, the lavabo upload command automatically uploads files into a per-user sub-directory of the TFTP server. Therefore, when a file called zImage is uploaded, the board will access it over TFTP by downloading user/zImage.

    Availability and installation

    As you could guess from our love for FOSS, lavabo is released under the GNU GPLv2 license in a GitHub repository. Extensive documentation is available if you’re interested in installing lavabo. Of course, patches are welcome!

    by Quentin Schulz at May 22, 2017 07:25 AM

    May 09, 2017

    Free Electrons

    Eight channels audio on i.MX7 with PCM3168

    Free Electrons engineer Alexandre Belloni recently worked on a custom carrier board for a Colibri iMX7 system-on-module from Toradex. This system-on-module obviously uses the i.MX7 ARM processor from Freescale/NXP.

    While the module includes an SGTL5000 codec, one of the requirements for that project was to handle up to eight audio channels. The SGTL5000 uses I²S and handles only two channels.

    I2S timing diagram from the SGTL5000 datasheet

    Thankfully, the i.MX7 has multiple audio interfaces and one is fully available on the SODIMM connector of the Colibri iMX7. A TI PCM3168 was chosen for the carrier board and is connected to the second Synchronous Audio Interface (SAI2) of the i.MX7. This codec can handle up to 8 output channels and 6 input channels. It can take multiple formats as its input, but TDM requires the smallest number of signals (4 signals: bit clock, word clock, data input and data output).


    TDM timing diagram from the PCM3168 datasheet

    The current Linux long term support version is 4.9 and was chosen for this project. It has support for both the i.MX7 SAI (sound/soc/fsl/fsl_sai.c) and the PCM3168 (sound/soc/codecs/pcm3168a.c). That’s two of the three components that are needed, the last one being the driver linking both by describing the topology of the “sound card”. In order to keep the custom code to the minimum, there is an existing generic driver called simple-card (sound/soc/generic/simple-card.c). It is always worth trying to use it unless something really specific prevents that. Using it was as simple as writing the following DT node:

            board_sound {
                    compatible = "simple-audio-card";
                    simple-audio-card,name = "imx7-pcm3168";
                    simple-audio-card,widgets =
                            "Speaker", "Channel1out",
                            "Speaker", "Channel2out",
                            "Speaker", "Channel3out",
                            "Speaker", "Channel4out",
                            "Microphone", "Channel1in",
                            "Microphone", "Channel2in",
                            "Microphone", "Channel3in",
                            "Microphone", "Channel4in";
                    simple-audio-card,routing =
                            "Channel1out", "AOUT1L",
                            "Channel2out", "AOUT1R",
                            "Channel3out", "AOUT2L",
                            "Channel4out", "AOUT2R",
                            "Channel1in", "AIN1L",
                            "Channel2in", "AIN1R",
                            "Channel3in", "AIN2L",
                            "Channel4in", "AIN2R";
    
                    simple-audio-card,dai-link@0 {
                            format = "left_j";
                            bitclock-master = <&pcm3168_dac>;
                            frame-master = <&pcm3168_dac>;
                            frame-inversion;
    
                            cpu {
                                    sound-dai = <&sai2>;
                                    dai-tdm-slot-num = <8>;
                                    dai-tdm-slot-width = <32>;
                            };
    
                            pcm3168_dac: codec {
                                    sound-dai = <&pcm3168 0>;
                                    clocks = <&codec_osc>;
                            };
                    };
    
                    simple-audio-card,dai-link@2 {
                            format = "left_j";
                            bitclock-master = <&pcm3168_adc>;
                            frame-master = <&pcm3168_adc>;
    
                            cpu {
                                    sound-dai = <&sai2>;
                                    dai-tdm-slot-num = <8>;
                                    dai-tdm-slot-width = <32>;
                            };
    
                            pcm3168_adc: codec {
                                    sound-dai = <&pcm3168 1>;
                                    clocks = <&codec_osc>;
                            };
                    };
            };

    There are multiple things of interest:

    • Only 4 input channels and 4 output channels are routed because the carrier board only had that wired.
    • There are two DAI links because the pcm3168 driver exposes inputs and outputs separately
    • As per the PCM3168 datasheet:
      • left justified mode is used
      • dai-tdm-slot-num is set to 8 even though only 4 are actually used
      • dai-tdm-slot-width is set to 32 because the codec takes 24-bit samples but requires 32 clocks per sample (this is solved later in userspace)
      • The codec is master which is usually best regarding clock accuracy, especially since the various SoMs on the market almost never expose the audio clock on the carrier board interface. Here, a crystal was used to clock the PCM3168.

    The PCM3168 codec is added under the ecspi3 node as that is where it is connected:

    &ecspi3 {
            pcm3168: codec@0 {
                    compatible = "ti,pcm3168a";
                    reg = <0>;
                    spi-max-frequency = <1000000>;
                    clocks = <&codec_osc>;
                    clock-names = "scki";
                    #sound-dai-cells = <1>;
                    VDD1-supply = <&reg_module_3v3>;
                    VDD2-supply = <&reg_module_3v3>;
                    VCCAD1-supply = <&reg_board_5v0>;
                    VCCAD2-supply = <&reg_board_5v0>;
                    VCCDA1-supply = <&reg_board_5v0>;
                    VCCDA2-supply = <&reg_board_5v0>;
            };
    };
    

    #sound-dai-cells is what allows selecting between the input and output interfaces.

    On top of that, multiple issues had to be fixed:

    Finally, an ALSA configuration file (/usr/share/alsa/cards/imx7-pcm3168.conf) was written to ensure samples sent to the card are in the proper format, S32_LE. 24-bit samples will simply have zeroes in the least significant byte. For 32-bit samples, the codec will properly ignore the least significant byte.
    This file also describes that the first subdevice is the playback (output) device and the second subdevice is the capture (input) device.

    imx7-pcm3168.pcm.default {
    	@args [ CARD ]
    	@args.CARD {
    		type string
    	}
    	type asym
    	playback.pcm {
    		type plug
    		slave {
    			pcm {
    				type hw
    				card $CARD
    				device 0
    			}
    			format S32_LE
    			rate 48000
    			channels 4
    		}
    	}
    	capture.pcm {
    		type plug
    		slave {
    			pcm {
    				type hw
    				card $CARD
    				device 1
    			}
    			format S32_LE
    			rate 48000
    			channels 4
    		}
    	}
    }

    On top of that, the dmix and dsnoop ALSA plugins can be used to separate channels.

    To conclude, this shows that it is possible to easily leverage existing code to integrate an audio codec in a design by simply writing a device tree snippet and maybe an ALSA configuration file if necessary.

    by Alexandre Belloni at May 09, 2017 08:16 AM

    May 04, 2017

    Free Electrons

    Feedback from the Netdev 2.1 conference

    At Free Electrons, we regularly work on networking topics as part of our Linux kernel contributions and thus we decided to attend our very first Netdev conference this year in Montreal. With the recent evolution of the network subsystem and its drivers capabilities, the conference was a very good opportunity to stay up-to-date, thanks to lots of interesting sessions.

    Eric Dumazet presenting “Busypolling next generation”

    The speakers and the Netdev committee did an impressive job by offering such a great schedule and the recorded talks are already available on the Netdev Youtube channel. We particularly liked a few of those talks.

    Distributed Switch Architecture – slides – video

    Andrew Lunn, Vivien Didelot and Florian Fainelli presented DSA, the Distributed Switch Architecture, by first giving an overview of what DSA is and then presenting its design. They completed their talk by discussing the future of this subsystem.

    DSA in one slide

    The goal of the DSA subsystem is to support Ethernet switches connected to the CPU through an Ethernet controller. The distributed part comes from the possibility to have multiple switches connected together through dedicated ports. DSA was introduced nearly 10 years ago but was mostly quiet and only recently came back to life thanks to contributions made by the authors of this talk, its maintainers.

    The main idea of DSA is to reuse the available internal representations and tools to describe and configure the switches. Ports are represented as Linux network interfaces to allow userspace to configure them using common tools; the Linux bridging concept is used for interface bridging and the Linux bonding concept for port trunks. A switch handled by DSA is not seen as a special device with its own control interface but rather as a hardware accelerator for specific networking capabilities.

    DSA has its own data plane where the switch ports are slave interfaces and the Ethernet controller connected to the SoC a master one. Tagging protocols are used to direct the frames to a specific port when coming from the SoC, as well as when received by the switch. For example, the RX path has an extra check after netif_receive_skb() so that if DSA is used, the frame can be tagged and reinjected into the network stack RX flow.
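
    To make the tagging idea a bit more concrete, here is a purely illustrative sketch (invented tag layout and names, not the kernel's actual DSA code or any real tagging format) of how a receive-side tag identifying the source port could be mapped to the corresponding slave interface:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical 4-byte switch tag sitting right after the Ethernet header.
     * Real tagging protocols (EDSA, Broadcom, Marvell, ...) each define their
     * own layout; this only illustrates the principle. */
    #define EXAMPLE_TAG_LEN     4
    #define EXAMPLE_TAG_OFFSET  14  /* after destination MAC, source MAC, EtherType */

    /* Return the index of the slave interface (switch port) the frame came from,
     * or -1 if the frame is too short or the port number is invalid. */
    int example_tag_source_port(const uint8_t *frame, size_t len,
                                unsigned int num_ports)
    {
        if (len < EXAMPLE_TAG_OFFSET + EXAMPLE_TAG_LEN)
            return -1;

        uint8_t port = frame[EXAMPLE_TAG_OFFSET];   /* first tag byte: source port */

        return (port < num_ports) ? (int)port : -1;
    }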

    Finally, they talked about the relationship between DSA and Switchdev, and cross-chip configuration for interconnected switches. They also exposed the upcoming changes in DSA as well as long term goals.

    Memory bottlenecks – slides

    As part of the network performance workshop, Jesper Dangaard Brouer presented memory bottlenecks in the allocators caused by specific network workloads, and how to deal with them. The baseline SLAB/SLUB performance was found to be too slow, particularly when using XDP. One way for a driver to solve this issue is to implement a custom page recycling mechanism, and that's what all high-speed drivers do. He then displayed some data to show why this mechanism is needed when targeting the 10G network budget.

    Jesper is working on a generic solution called page pool and sent a first RFC at the end of 2016. As mentioned in the cover letter, it’s still not ready for inclusion and was only sent for early reviews. He also made a small overview of his implementation.

    DDOS countermeasures with XDP – slides #1 – slides #2 – video #1 – video #2

    These two talks were given by Gilberto Bertin from Cloudflare and Martin Lau from Facebook. While they were not talking about device driver implementation or improvements in the network stack directly related to what we do at Free Electrons, it was nice to see how XDP is used in production.

    XDP, the eXpress Data Path, provides a programmable data path at the lowest point of the network stack by processing RX packets directly out of the drivers’ RX ring queues. It's quite new and is an answer to lots of userspace-based solutions such as DPDK. Gilberto and Martin showed excellent results, confirming the usefulness of XDP.
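
    For readers who have not seen it, the programmable part is an eBPF program attached at the driver level; a minimal, purely illustrative XDP program (my own sketch, not taken from either talk) that simply drops everything it sees looks roughly like this:

    #include <linux/bpf.h>

    /* Build with: clang -O2 -target bpf -c xdp_drop.c -o xdp_drop.o
     * and attach to an interface whose driver supports XDP. */
    #define SEC(NAME) __attribute__((section(NAME), used))

    SEC("prog")
    int xdp_drop_all(struct xdp_md *ctx)
    {
        /* No parsing at all: every received packet is dropped before the
         * network stack ever allocates an skb for it. */
        return XDP_DROP;
    }

    char _license[] SEC("license") = "GPL";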

    From a driver point of view, some changes are required to support it: RX hooks must be added, some API changes are needed, and the driver's memory model often has to be updated. So far, in v4.10, only a few drivers support XDP.

    XDP MythBusters – slides – video

    David S. Miller, the maintainer of the Linux networking stack and drivers, did an interesting keynote about XDP and eBPF. The eXpress Data Path clearly was the hot topic of this Netdev 2.1 conference with lots of talks related to the concept and David did a good overview of what XDP is, its purposes, advantages and limitations. He also quickly covered eBPF, the extended Berkeley Packet Filters, which is used in XDP to filter packets.

    This presentation was a comprehensive introduction to the concepts introduced by XDP and its different use cases.

    Conclusion

    Netdev 2.1 was an excellent experience for us. The conference was well organized, the single track format allowed us to see every session on the schedule, and meeting with attendees and speakers was easy. The content was highly technical and an excellent opportunity to stay up-to-date with the latest changes of the networking subsystem in the kernel. The conference hosted both talks about in-kernel topics and their use in userspace, which we think is a very good approach: not focusing only on the kernel side but also being aware of the users' needs and their use cases.

    by Antoine Ténart at May 04, 2017 08:13 AM

    May 02, 2017

    Harald Welte

    OsmoDevCon 2017 Review

    After the public user-oriented OsmoCon 2017, we also recently had the 6th incarnation of our annual contributors-only Osmocom Developer Conference: The OsmoDevCon 2017.

    This is a much smaller group, typically about 20 people, and is limited to actual developers who have a past record of contributing to any of the many Osmocom projects.

    We had a large number of presentations and discussions. In fact, so many that the schedule of talks extended from 10am to midnight on some days. While this is great, it also means that there was definitely too little time for more informal conversations, chatting or even actual work on code.

    We also have such a wide range of topics and scope inside Osmocom that the traditional ad-hoc scheduling approach no longer seems to be working as it used to. Not everyone is interested in (or has time for) all the topics, so we should group them according to their topic/subject on a given day or half-day. This will enable people to attend only those days that are relevant to them, and spend the remaining days in an adjacent room hacking away on code.

    It's sad that we only have OsmoDevCon once per year. Maybe that's actually also something to think about. Rather than having 4 days once per year, maybe have two weekends per year.

    Always in motion the future is.

    by Harald Welte at May 02, 2017 10:00 PM

    Overhyped Docker

    Overhyped Docker missing the most basic features

    I've always been extremely skeptical of suddenly emerging over-hyped technologies, particularly if they advertise to solve problems by adding yet another layer to systems that are already sufficiently complex themselves.

    There are of course many issues with containers, ranging from replicated system libraries to the basic underlying statement that you're giving up on the system package manager to properly deal with dependencies.

    I'm also highly skeptical of FOSS projects that are primarily driven by one (VC funded?) company. Especially if their offering includes a so-called cloud service which they can stop operating at any given point in time, or (more realistically) first get everybody to use and then start charging for.

    But well, despite all the bad things I read about it over the years, on one day in May 2017 I finally thought let's give it a try. My problem to solve as a test balloon is fairly simple.

    My basic use case

    The plan is to start OsmoSTP, the m3ua-testtool and the sua-testtool, which both connect to OsmoSTP. By running this setup inside containers and inside an internal network, we could then execute the entire testsuite, e.g. during a Jenkins test run, without having IP address or port number conflicts. It could even run multiple times in parallel on one buildhost, verifying different patches as part of the continuous integration setup.

    This application is not so complex. All it needs is three containers, an internal network and some connections in between. Should be a piece of cake, right?

    But enter the world of buzzword-fueled web-4000.0 software-defined virtualised and orchestrated container NFV + SDN voodoo: it turns out to be impossible, at least with the preferred tools they advertise.

    Dockerfiles

    The part that worked relatively easily was writing a few Dockerfiles to build the actual containers. All based on debian:jessie from the library.

    As m3ua-testtool is written in guile and needs to build some guile plugin/extension, I had to actually include guile-2.0-dev and other packages in the container, making it a bit bloated.

    I couldn't immediately find a nice example Dockerfile recipe that would allow me to build stuff from source outside of the container, and then install the resulting binaries into the container. This seems to be a somewhat weak spot, where more support/infrastructure would be helpful. I guess the idea is that you simply install applications via package feeds and apt-get. But I digress.

    So after some tinkering, I ended up with three docker containers:

    • one running OsmoSTP
    • one running m3ua-testtool
    • one running sua-testtool

    I also managed to create an internal bridged network between the containers, so the containers could talk to one another.

    However, I have to manually start each of the containers with ugly long command line arguments, such as docker run --network sigtran --ip 172.18.0.200 -it osmo-stp-master. This is of course sub-optimal, and what Docker Services + Stacks should resolve.

    Services + Stacks

    The idea seems good: A service defines how a given container is run, and a stack defines multiple containers and their relation to each other. So it should be simple to define a stack with three services, right?

    Well, it turns out that it is not. Docker documents that you can configure a static ipv4_address [1] for each service/container, but it seems related configuration statements are simply silently ignored/discarded [2], [3], [4].

    This seems to be related to the fact that, for some strange reason, stacks can (at least in later versions of Docker) only use overlay type networks, rather than the much simpler bridge networks. And while bridge networks appear to support static IP address allocations, overlay apparently doesn't.

    I still have a hard time grasping that something that considers itself a serious product for production use (made by a company with an estimated value of over a billion USD, not by a few hobbyists) has no support for running containers on static IP addresses. How many applications out there have I seen that require static IP address configuration? How much simpler do setups get if you don't have to rely on things like dynamic DNS updates (or DNS availability at all)?

    So I'm stuck with having to manually configure the network between my containers, and manually starting them by clumsy shell scripts, rather than having a proper abstraction for all of that. Well done :/

    Exposing Ports

    Unrelated to all of the above: If you run some software inside containers, you will pretty soon want to expose some network services from containers. This should also be the most basic task on the planet.

    However, it seems that the creators of docker live in the early 1980ies, where only TCP and UDP transport protocols existed. They seem to have missed that by the late 1990ies to early 2000s, protocols like SCTP or DCCP were invented.

    But yet, in 2017, Docker chooses to support only TCP and UDP when exposing ports.

    Now some of the readers may think 'who uses SCTP anyway'. I will give you a straight answer: Everyone who has a mobile phone uses SCTP. This is due to the fact that pretty much all the connections inside cellular networks (at least for 3G/4G networks, and in reality also for many 2G networks) are using SCTP as underlying transport protocol, from the radio access network into the core network. So every time you switch your phone on, or do anything with it, you are using SCTP. Not on your phone itself, but by all the systems that form the network that you're using. And with the drive to C-RAN, NFV, SDN and all the other buzzwords also appearing in the Cellular Telecom field, people should actually worry about it, if they want to be a part of the software stack that is used in future cellular telecom systems.

    Summary

    After spending the better part of a day on what seemed like the most basic use case for running three networked containers using Docker, I'm back to step one: most likely inventing some custom scripts based on unshare to run my three test programs in a separate network namespace for isolated test suite execution as part of a Jenkins CI setup :/

    It's also clear that Docker apparently doesn't care much about playing a role in the Cellular Telecom world, which is increasingly moving away from proprietary and hardware-based systems (like STPs) to virtualised, software-based systems.

    [1] https://docs.docker.com/compose/compose-file/#ipv4address-ipv6address
    [2] https://forums.docker.com/t/docker-swarm-1-13-static-ips-for-containers/28060
    [3] https://github.com/moby/moby/issues/31860
    [4] https://github.com/moby/moby/issues/24170

    by Harald Welte at May 02, 2017 10:00 PM

    Free Electrons

    Linux 4.11, Free Electrons contributions

    Linus Torvalds released Linux 4.11 this Sunday. For an overview of the new features provided by this new release, one can read the coverage from LWN: part 1, part 2 and part 3. The KernelNewbies site also has a detailed summary of the new features.

    With 137 patches contributed, Free Electrons is the 18th contributing company according to the Kernel Patch Statistics. Free Electrons engineer Maxime Ripard appears in the list of top contributors by changed lines in the LWN statistics.

    Our most important contributions to this release have been:

    • Support for Atmel platforms
      • Alexandre Belloni improved suspend/resume support for the Atmel watchdog driver, I2C controller driver and UART controller driver. This is part of a larger effort to upstream support for the backup mode of the Atmel SAMA5D2 SoC.
      • Alexandre Belloni also improved the at91-poweroff driver to properly shutdown LPDDR memories.
      • Boris Brezillon contributed a fix for the Atmel HLCDC display controller driver, as well as fixes for the atmel-ebi driver.
    • Support for Allwinner platforms
      • Boris Brezillon contributed a number of improvements to the sunxi-nand driver.
      • Mylène Josserand contributed a new driver for the digital audio codec on the Allwinner sun8i SoC, as well as the corresponding Device Tree changes and related fixes. Thanks to this driver, Mylène enabled audio support on the R16 Parrot and A33 Sinlinx boards.
      • Maxime Ripard contributed numerous improvements to the sunxi-mmc MMC controller driver, to support higher data rates, especially for the Allwinner A64.
      • Maxime Ripard contributed official Device Tree bindings for the ARM Mali GPU, which allows the GPU to be described in the Device Tree of the upstream kernel, even if the ARM kernel driver for the Mali will never be merged upstream.
      • Maxime Ripard contributed a number of fixes for the rtc-sun6i driver.
      • Maxime Ripard enabled display support on the A33 Sinlinx board, by contributing a panel driver and the necessary Device Tree changes.
      • Maxime Ripard continued his clean-up effort, by converting the GR8 and sun5i clock drivers to the sunxi-ng clock infrastructure, and converting the sun5i pinctrl driver to the new model.
      • Quentin Schulz added a power supply driver for the AXP20X and AXP22X PMICs used on numerous Allwinner platforms, as well as numerous Device Tree changes to enable it on the R16 Parrot and A33 Sinlinx boards.
    • Support for Marvell platforms
      • Grégory Clement added support for the RTC found in the Marvell Armada 7K and 8K SoCs.
      • Grégory Clement added support for the Marvell 88E6141 and 88E6341 Ethernet switches, which are used in the Armada 3700 based EspressoBin development board.
      • Romain Perier enabled the I2C controller, SPI controller and Ethernet switch on the EspressoBin, by contributing Device Tree changes.
      • Thomas Petazzoni contributed a number of fixes to the OMAP hwrng driver, which turns out to also be used on the Marvell 7K/8K platforms for their HW random number generator.
      • Thomas Petazzoni contributed a number of patches for the mvpp2 Ethernet controller driver, preparing the future addition of PPv2.2 support to the driver. The mvpp2 driver currently only supports PPv2.1, the Ethernet controller used on the Marvell Armada 375, and we are working on extending it to support PPv2.2, the Ethernet controller used on the Marvell Armada 7K/8K. PPv2.2 support is scheduled to be merged in 4.12.
    • Support for RaspberryPi platforms
      • Boris Brezillon contributed Device Tree changes to enable the VEC (Video Encoder) on all bcm283x platforms. Boris had previously contributed the driver for the VEC.

    In addition to our direct contributions, a number of Free Electrons engineers are also maintainers of various subsystems in the Linux kernel. As part of this maintenance role:

    • Maxime Ripard, co-maintainer of the Allwinner ARM platform, reviewed and merged 85 patches from contributors
    • Alexandre Belloni, maintainer of the RTC subsystem and co-maintainer of the Atmel ARM platform, reviewed and merged 60 patches from contributors
    • Grégory Clement, co-maintainer of the Marvell ARM platform, reviewed and merged 42 patches from contributors
    • Boris Brezillon, maintainer of the MTD NAND subsystem, reviewed and merged 8 patches from contributors

    Here is the detailed list of contributions, commit per commit:

    by Thomas Petazzoni at May 02, 2017 12:23 PM

    May 01, 2017

    Harald Welte

    Book on Practical GPL Compliance

    My former gpl-violations.org colleague Armijn Hemel and Shane Coughlan (former coordinator of the FSFE Legal Network) have written a book on practical GPL compliance issues.

    I've read through it (in the bath tub of course, what better place to read technical literature), and I can agree wholeheartedly with its contents. For those who have been involved in GPL compliance engineering there shouldn't be much new - but for the vast majority of developers out there who have had little exposure to the bread-and-butter work of providing complete and corresponding source code, it makes an excellent introductory text.

    The book focuses on compliance with GPLv2, which is probably not too surprising given that it's published by the Linux Foundation, and Linux being GPLv2.

    You can download an electronic copy of the book from https://www.linuxfoundation.org/news-media/research/practical-gpl-compliance

    Given that the subject matter is Free Software, and that the book is written by long-time community members, I cannot help but notice with a bit of surprise that the book is released under classic copyright, All Rights Reserved, with no freedoms granted to the user.

    Considering the sensitive legal topics touched, I can understand the possible motivation by the authors to not permit derivative works. But then, there still are licenses such as CC-BY-ND which prevent derivative works but still permit users to make and distribute copies of the work itself. I've made that recommendation / request to Shane, let's see if they can arrange for some more freedom for their readers.

    by Harald Welte at May 01, 2017 10:00 PM

    April 30, 2017

    Harald Welte

    OsmoCon 2017 Review

    It's already one week past the event, so I really have to sit down and write some review of the first public Osmocom Conference ever: OsmoCon 2017.

    The event was a huge success, by all accounts.

    • We've not only been sold out, but we also had to turn down some last minute registrations due to the venue being beyond capacity (60 seats). People traveled from Japan, India, the US, Mexico and many other places to attend.
    • We've had an amazing audience ranging from commercial operators to community cellular operators to professional developers doing work related to Osmocom, academia, IT security crowds and last but not least enthusiasts/hobbyists, with whom the project[s] started.
    • I've received exclusively positive feedback from many attendees
    • We've had a great programme. Some part of it was of introductory nature and probably not too interesting if you've been in Osmocom for a few years. However, the work on 3G as well as the current roadmap was probably not as widely known yet. Also, I really loved to see Roch's talk about Running a commercial cellular network with Osmocom software as well as the talk on Facebook's OpenCellular BTS hardware and the Community Cellular Manager.
    • We had very professional live streaming + video recordings courtesy of the C3VOC team. Thanks a lot for your support and for having the video recordings of all talks online already the day after the event.

    We also received some requests for improvements, many of which we will hopefully consider before the next Osmocom Conference:

    • have a multiple day event. Particularly if you're traveling long-distance, it is a lot of overhead for a single-day event. We of course fully understand that. On the other hand, it was the first Osmocom Conference, and hence it was a test balloon where it was initially unclear if we'll be able to get a reasonable number of attendees interested at all, or not. And organizing an event with venue and talks for multiple days if in the end only 10 people attend would have been a lot of effort and financial risk. But now that we know there are interested folks, we can definitely think of a multiple day event next time
    • Signs indicating venue details on the last meters. I agree, this could have been better. The address of the venue was published, but we could have had some signs/posters at the door pointing you to the right meeting room inside the venue. Sorry for that.
    • Better internet connectivity. This is a double-edged sword. Of course we want our audience to be primarily focused on the talks and not distracted :P I would hope that most people are able to survive a one day event without good connectivity, but for sure we will have to improve in case of a multiple-day event in the future

    In terms of my requests to the attendees, I only have a couple:

    • Participate in the discussions on the schedule/programme while it is still possible to influence it. When we started to put together the programme, I posted about it on the openbsc mailing list and invited feedback. Still, most people seem to have missed the time window during which talks could have been submitted and the schedule still influenced before finalizing it
    • Register in time. We had almost no registrations until about two weeks ahead of the event (and I was considering cancelling it), and then suddenly were sold out in the week before the event. We've had people who waited to book their tickets, only to learn that they were sold out. I guess we will introduce early bird pricing and add a very expensive last minute ticket option next year in order to increase the motivation to register early and thus give us flexibility regarding venue planning.

    Thanks again to everyone involved in OsmoCon 2017!

    Ok, now, all of you who missed the event: Go to https://media.ccc.de/c/osmocon17 and check out the recordings. Have fun!

    by Harald Welte at April 30, 2017 10:00 PM

    April 28, 2017

    Andrew Zonenberg, Silicon Exposed

    Quest for camp stove fuel

    For those of you who aren't keeping up with my occasional Twitter/Facebook posts on the subject, I volunteer with a local search and rescue unit. This means that a few times a month I have to grab my gear and run out into the woods on zero notice to find an injured hiker, locate an elderly person with Alzheimer's, or whatever the emergency du jour is.

    Since I don't have time to grab fresh food on my way out the door when duty calls, I keep my pack and load-bearing vest stocked with shelf-stable foods like energy bars and surplus military rations. Many missions are short and intense, leaving me no time to eat anything but finger-food items (Clif bars and First Strike Ration sandwiches are my favorites) kept in a vest pocket.

    My SAR vest. Weighs about 17 pounds / 7.7 kg once the Camelbak bladder is added.
    On the other hand, during longer missions there may be opportunities to make hot food while waiting for a medevac helicopter, ground team with stretcher, etc - and of course there's plenty of time to cook a hot dinner during training weekends. Besides being a convenience, hot food and drink helps us (and the subject) avoid hypothermia so it can be a literal life-saver.

    I've been using MRE chemical heaters for this, because they're small, lightweight (20 g / 0.7 oz each), and not too pricey (about $1 each from surplus dealers). Their major flaw is that they don't get all that hot, so during cold weather it's hard to get your food more than lukewarm.

    I've used many kinds of camp stoves (propane and white gas primarily) over the course of my camping, but didn't own one small enough to use for SAR. My full 48-hour gear loadout (including water) weighs around 45 pounds / 20 kg, and I really didn't want to add much more to this. The MSR Whisperlite, for example, weighs in at 430 g / 15.2 oz for the stove, fuel pump, and wind shield. Add to this 150 g / 5.25 oz for the fuel bottle, a pot to cook in, and the fuel itself and you're looking at close to 1 kg / 2 pounds all told.

    I have an aluminum camp frying pan that, including lid, weighs 121 g / 4.3 oz. It seemed hard to get much lighter for something large enough that you could squeeze an MRE entree into, so I kept it.

    After a bit of browsing in the local Wal-Mart, I found a tiny sheet metal folding stove that weighed 112 g / 3.98 oz empty. It's designed to burn pellets of hexamine fuel.

    The stove. Ignore the aluminum foil, it was there from a previous experiment.
    In my testing it worked pretty well. One pellet brought 250 ml of water from 10C to boiling in six minutes, and held it at a boil for a minute before burning out. The fuel burned fairly cleanly and didn't leave that much soot on the pot either, which was nice.

    What's not so nice, however, was the fuel. According to the MSDS, hexamine decomposes upon heating or contact with skin into formaldehyde, which is toxic and carcinogenic. Combustion products include such tasty substances as hydrogen cyanide and ammonia. This really didn't seem like something that I wanted to handle, or burn, in close proximity to food! Thus began my quest for a safer alternative.

    My first thought was to use tea light candles, since I already had a case of a hundred for use as fire starters. In my testing, one tea light was able to heat a pot of water from 10C to 30C in a whopping 21 minutes before starting to reach an equilibrium where the pot lost heat as fast as it gained it. I continued the test out to 34 minutes, at which point it was a toasty 36C.

    The stove was big enough to fit more than one tea light, so the obvious next step was to put six of them in a 3x2 grid. This heated significantly more, at the 36-minute mark my water measured a respectable 78C.

    I figured I was on the right track, but needed to burn more wax per unit time. Some rough calculations suggested that a brick of paraffin wax the size of the stove and about as thick as a tea light contained 1.5 kWh of energy, and would output about 35 W of heat per wick. Assuming 25% energy transfer efficiency, which seemed reasonable based on the temperature data I had measured earlier, I needed to put out around 675 W to bring my pot to a boil in ten minutes. This came out to approximately 20 candle wicks.
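
    A rough re-derivation of that estimate (assuming a 250 ml pot heated from 10 C to boiling in ten minutes; the water volume is my assumption, carried over from the earlier hexamine test):

    Q     = 0.25 kg x 4186 J/(kg*K) x 90 K     ~ 94 kJ
    P_out = 94 kJ / (600 s x 0.25 efficiency)  ~ 630 W
    wicks = roughly 630-675 W / 35 W per wick  ~ 18-20 wicks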

    I started out by folding a tray out of heavy duty aluminum foil, and reinforcing it on the outside with aluminum foil duct tape. I then bought a pack of tea light wicks on Amazon and attached them to the tray with double-sided tape.
    Giant 20-wicked candle before adding wax
    I made a water bath on my hot plate and melted a bunch of tea lights in a beaker. I wasn't in the mood to get spattered with hot wax so I wore long-sleeved clothes and a face shield. I was pretty sure that the water bath wouldn't get anywhere near the ignition point of the wax but did the work outside on a concrete patio and had a CO2 fire extinguisher on standby just in case.

    Melting wax. Safety first, everyone!
    The resulting behemoth of a candle actually looked pretty nice!
    20-wick, 700W thermal output candle with tea lights for scale
    After I was done and the wax had solidified I put the candle in my stove and lit it off. It took a while to get started (a light breeze kept blowing out one wick or another and I used quite a few matches to get them all lit), but after a while I had a solid flame going. At the six-minute mark my water had reached 37C.

    A few minutes later, disaster struck! The pool of molten wax reached the flash point and ignited across the whole surface. At this point I had a massive flame - my pot went from 48 to 82C in two minutes! This translates to 2.6 kW assuming 100% energy transfer efficiency, so actual power output was probably upwards of 5 kW.

    I removed the pot (using welding gloves since the flames were licking up the handle) and grabbed a photo of the fireball before thinking about how to extinguish the fire.

    Pretty sure this isn't what a stove is supposed to look like
    Since I was outside on a non-flammable surface the fire wasn't an immediate safety hazard, but I wanted to put it out non-destructively to preserve evidence for failure analysis. I opted to smother it with a giant candle snuffer that I rapidly folded out of heavy-duty aluminum foil.

    The carnage after the fire was extinguished. Note the discolored wax!
    It took me a while to clean up the mess - the giant candle had turned tan from incomplete combustion. It had also sprung a leak at some point, spilling a bit of wax out onto my patio.

    On top of that, my pot was coal-black from all of the soot the super-rich flame was putting out. My wife wouldn't let it anywhere near the sink so I scrubbed it as best I could in the bathtub, then spent probably 20 minutes scrubbing all of the gray stains off the tub itself.

    In order to avoid the time-consuming casting of wax, my next test used a slug of wax from a tea light that I drilled holes in, then inserted four wicks. I covered the top of the candle with aluminum foil tape to reflect heat back up at the pot, in a bid to increase efficiency and keep the melt puddle below the flash point.

    Quad-wick tea light
    This performed pretty well in my test. It got my pot up to 35C at the 12-minute mark, which was right about where I expected based on the x1 and x6 candle tests, and didn't flash over.

    The obvious next step was to make five of them and see if this would work any better. It ignited more easily than the "brick" candle, and reached 83C at the 6-minute mark. Before T+ 7 minutes, however, the glue on the tape had failed from the heat, and the wax flashed. By the time I got the pot out of harm's way the water was boiling and it was covered in soot (again).

    This time, it was a little bit breezier and my snuffer failed to exclude enough air to extinguish the flames. I ended up having to blast it with the CO2 extinguisher I had ready for just this situation. It wasn't hard to put out and I only used about two of the ten pounds of gas. (Ironically, I had planned to take the extinguisher in to get serviced the next morning because it was almost due for annual preventive maintenance. I ended up needing a recharge too...)

    After cleaning off my pot and stove, and scraping some of the spilled wax off my driveway, it was back to the drawing board. I thought about other potential fuels I had lying around, and several obvious options came to mind.

    Testing booze for flammability
    I'm not a big drinker but houseguests have resulted in me having a few bottles of liquor around so I tested it out. Jack didn't burn at all, Captain Morgan white rum burned fitfully and left a sugary residue without putting out much heat. 100-proof vodka left a bit of starchy residue and was tricky to light.

    A tea light cup full of 99% isopropyl alcohol brought my pot to 75C in five minutes before burning out, but was filthy and left soot everywhere. Hand sanitizer (about 60% ethanol) burned cleanly, but slower and cooler due to the water content - peak temperature of 54C and 12 minute burn time.

    Ethanol seemed like a viable fuel if I could get it up to a higher concentration. I wanted to avoid liquid fuels due to difficulty of handling and the risk of spills, but a thick gel that didn't spill easily looked like a good option.

    After a bit of research I discovered that calcium acetate (a salt of acetic acid) was very soluble in water, but not in alcohols. When a saturated solution of it in water is added to an alcohol it forms a stiff gel, commonly referred to as a "California snowball" because it burns and has a consistency like wet snow. I don't have any photos of my test handy, but here's a video from somebody else that shows it off nicely.



    Two tea light cups full of the stuff brought my pot of water to a boil in 8 minutes, and held it there until burning out just before the 13-minute mark. I also tried boiling a FSR sandwich packet in a half-inch or so of water, and it was deliciously warm by the end. This seemed like a pretty good fuel!


    Testing the calcium acetate fuel. I put a lid on the pot after taking this pic.

    I filled two film-canister type containers with the calcium acetate + ethanol gel fuel and left it in my SAR pack. As luck would have it, I spent the next day looking for a missing hiker so it spent quite a while bouncing around driving on dirt roads and hiking.

    When I got home I was disappointed to see clear liquid inside the bag that my stove and fuel were stored in. I opened the canisters only to find a thin whitish liquid instead of a stiff gel.

    It seemed that the calcium acetate gel was not very stable, and over time the calcium acetate particles would precipitate out and the solution would revert to a liquid state. This clearly would not do.

    Hand sanitizer seemed like a pretty good fuel other than being underpowered and perfumed, so I went to the grocery store and started looking at ingredient lists. They all seemed pretty similar - ethanol, water, aloe and other moisturizers, perfumes, maybe colorants, and a thickener. The thickener was typically either hydroxyethyl cellulose or a carbomer.

    A few minutes on Amazon turned up a bag of Carbomer 940, a polyvinyl carboxy polymer cross-linked with esters of pentaerythritol. It's supposed to produce a viscosity of 45,000 to 70,000 CPS when added to water at 0.5% by weight. I also ordered a second bottle of Reagent Alcohol (90% ethanol / 5% methanol / 5% isopropanol with no bittering agents, ketones, or non-volatile ingredients) since my other one was pretty low after the calcium acetate failure.

    Carbomer 940 is fairly acidic (pH 2.7 - 3.3 at 0.5% concentration) in its pure form and gels when neutral or alkaline, so it needs to be neutralized. The recommended base for alcohol-based gels was triethanolamine, so I picked up a bottle of that too.


    Preparing to make carbomer-alcohol fuel gel

    I made a 50% alcohol-water solution and added 0.5% carbomer by mass. It didn't seem to fully dissolve, leaving a bunch of goopy chunks in the beaker.


    Incompletely dissolved Carbomer 940 in 50/50 water/alcohol
    I left it overnight to dissolve, blended it more, and then filtered off any big clumps with a coffee filter. I then added a few drops of triethanolamine, at which point the solution immediately turned cloudy. Upon blending, a rubbery white substance precipitated out of solution and stuck to my stick blender and the sidewalls of the beaker. This was not supposed to happen!


    Rubbery goop on the blender head
    Precipitate at the bottom of the beaker

    I tried everything I could think of - diluting the triethanolamine and adding it slowly to reduce sudden pH changes, lowering the alcohol concentration, and even letting the carbomer sit in solution for a few days before adding the triethanolamine. Nothing worked.

    I went back to square one and started reading more papers and watching process demonstration videos from the manufacturer. Eventually I noticed one source that suggested increasing the pH of the water to about 8 *before* adding the carbomer. This worked and gave a beautiful clear gel!

    After a bit of tinkering I found a good process: Starting with 100 ml of water, titrate to pH 8 with triethanolamine. Add 1 g of carbomer powder and blend until fully gelled. Add 300 ml of reagent alcohol a bit at a time, mixing thoroughly after each addition. About halfway through adding the alcohol the gel started to get pretty runny so I mixed in a few more drops of triethanolamine and another 500 mg of carbomer powder before mixing in the rest of the alcohol. I had only a little more alcohol left in the bottle (maybe 50 ml) so I stirred that in without bothering to measure.

    The resulting gel was quite stiff and held its shape for a little while after pouring, but could still be transferred between containers without much difficulty.


    Tea light can full of my final fuel
    I left the beaker of fuel in my garage for several days and shook it around a bit, but saw no evidence of degradation. Since it's basically just turbo-strength hand sanitizer (~78% instead of the usual 30-60%) without all of the perfumes and moisturizers, it should be pretty stable. I had no trouble igniting it down to 10C ambient temperatures, but may find it necessary to mix in some acetone or other low-flash-point fuel to light it reliably in the winter.

    The final batch of fuel filled two polypropylene specimen jars perfectly with just a little bit left over for a cooking test.


    One of my two fuel jars
    One tea light canister held 10.7 g / 0.38 oz of fuel, and I typically use two at a time, so 21.4 g / 0.76 oz. One jar thus holds enough fuel for about five cook sessions, which is more than I'd ever need for a SAR mission or weekend camping trip. The final weight of my entire cooking system (stove, one fuel jar, tea light cans, and pot) comes out to 408 g / 14.41 oz, or a bit less than an empty Whisperlite stove (not counting the pot, fuel tank, or fuel)!

    The only thing left was to try cooking on it. I squeezed a bacon-cheddar FSR sandwich into my pot, added a bit of water, and put it on top of the stove with two candle cups of fuel.


    Nice clean blue flame, barely visible
    By the six-minute mark the water was boiling away merrily and a cloud of steam was coming up around the edge of the lid. I took the pot off around 8 minutes and removed my snack.

    Munching on my sandwich. You can't tell in this lighting, but the stove is still burning.
    For those of you who haven't eaten First Strike Rations, the sandwiches in them are kind of like Hot Pockets or Toaster Strudels, except with a very thick and dense bread rather than a fluffy, flaky one. The fats in the bread are solid at room temperature and liquefy once it gets warm. This significantly softens the texture of the bread and makes it taste a lot better, so reaching this point is generally the primary goal when cooking one.

    My sandwich was firmly over that line and tasted very good (for Army food baked two years ago). The bacon could have been a bit warmer, but the stove kept on burning until a bit after the ten-minute mark so I could easily have left it in the boiling water for another two minutes and made it even hotter.

    Once I was done eating it was time to clean up. The stove had no visible dirt (beyond what was there from my previous experiments), and the tea light canisters were clean and fairly free of soot except in one or two spots around the edges. Almost no goopy residue was left behind.

    Stove after the cook test
    The pot was quite clean as well, with no black soot and only a very thin film of discoloration that was thin enough to leave colored interference fringes. Some of this was left over from previous testing, so if this test had been run on a virgin pot there'd be even less residue.

    Bottom of the pot after the cook test


    Overall, it was a long journey with many false steps, but I now have the ability to cook for myself over a weekend trip in less than a pound of weight, so I'm pretty happy. 

    EDIT: A few people have asked to see the raw data from my temperature-vs-time cook tests, so here it is.

    Raw data (graph 1)
     
    Raw data (graph 2)

    by Andrew Zonenberg (noreply@blogger.com) at April 28, 2017 08:37 PM

    April 26, 2017

    Bunnie Studios

    Name that Ware, April 2017

    The Ware for April 2017 is shown below.

    This is a guest ware, but the contributor shall remain anonymous per request. Thank you for the contribution, you know who you are!

    by bunnie at April 26, 2017 05:46 PM

    Winner, Name that Ware March 2017

    The ware for March 2017 seems to be a Schneider ATV61 industrial variable speed drive controller. As rasz_pl pointed out, I left the sticker unredacted. I had misgivings about hiding it, fearing the ware would become unguessable, but leaving it in made it perhaps a bit too easy. Prize goes to rasz_pl for being the first to guess; email me for your prize!

    by bunnie at April 26, 2017 05:46 PM

    April 25, 2017

    Altus Metrum

    TeleMini3

    TeleMini V3.0 Dual-deploy altimeter with telemetry now available

    TeleMini v3.0 is an update to our original TeleMini v1.0 flight computer. It is a miniature (1/2 inch by 1.7 inch) dual-deploy flight computer with data logging and radio telemetry. Small enough to fit comfortably in an 18mm tube, this powerful package does everything you need on a single board:

    • 512kB on-board data logging memory, compared with 5kB in v1.

    • 40mW, 70cm ham-band digital transceiver for in-flight telemetry and on-the-ground configuration, compared to 10mW in v1.

    • Transmitted telemetry includes altitude, speed, acceleration, flight state, igniter continuity, temperature and battery voltage. Monitor the state of the rocket before, during and after flight.

    • Radio direction finding beacon transmitted during and after flight. This beacon can be received with a regular 70cm Amateur radio receiver.

    • Barometer accurate to 100k' MSL. Reliable apogee detection, independent of flight path. Barometric data recorded on-board during flight. The v1 boards could only fly to 45k'.

    • Dual-deploy with adjustable apogee delay and main altitude. Fires standard e-matches and Q2G2 igniters.

    • 0.5” x 1.7”. Fits easily in an 18mm tube. This is slightly longer than the v1 boards to provide room for two extra mounting holes past the pyro screw terminals.

    • Uses rechargeable Lithium Polymer battery technology. All-day power in a small and light-weight package.

    • Learn more at http://www.altusmetrum.org/TeleMini/

    • Purchase these at http://shop.gag.com/home-page/telemini-v3.html

    I don't have anything in these images to show just how tiny this board is—but the spacing between the screw terminals is 2.54mm (0.1in), and the whole board is only 13mm wide (1/2in).

    This was a fun board to design. As you might guess from the version number, we made a couple prototypes of a version 2 using the same CC1111 SoC/radio part as version 1 but in the EasyMini form factor (0.8 by 1.5 inches). Feedback from existing users indicated that bigger wasn't better in this case, so we shelved that design.

    With the availability of the STM32F042 ARM Cortex-M0 part in a 4mm square package, I was able to pack that, the higher power CC1200 radio part, a 512kB memory part and a beeper into the same space as the original TeleMini version 1 board. There is USB on the board, but it's only on some tiny holes, along with the cortex SWD debugging connection. I may make some kind of jig to gain access to that for configuration, data download and reprogramming.

    For those interested in an even smaller option, you could remove the screw terminals and battery connector and directly wire to the board, and replace the beeper with a shorter version. You could even cut the rear mounting holes off to make the board shorter; there are no components in that part of the board.

    by keithp's rocket blog at April 25, 2017 04:01 PM

    April 18, 2017

    Open Hardware Repository

    White Rabbit - 18-04-17: clean that fibre and SFP!

    The White Rabbit team at CERN organised a short course about fibre-optic cleaning and inspection.

    A special fibre inspection microscope that automatically analyses the image to decide if a cable or SFP passes or fails the norms was demonstrated.
    The images of some of the often-used cables and SFP modules that we picked from the development lab clearly showed traces of grease and dust.

    The course showed undoubtedly that fibres should always be inspected and that in almost all cases they should be cleaned before plugging in.
    One should not forget to inspect and clean the SFP side either!

    The slides of this Fibre Cleaning and Inspection course are available via the OHWR Electronics Design project.

    Thanks to Amin Shoaie from CERN's EN-EL group for making this course available.
    Note that this course and the practical exercises will be repeated at CERN in the last week of April. Please contact us if you are interested.

    Click on image to see the course (pdf, 711kB)

    by Erik van der Bij (Erik.van.der.Bij@cern.ch) at April 18, 2017 05:49 PM

    April 16, 2017

    Harald Welte

    Things you find when using SCTP on Linux

    Observations on SCTP and Linux

    When I was still doing Linux kernel work with netfilter/iptables in the early 2000's, I was somebody who actually regularly had a look at the new RFCs that came out. So I saw the SCTP RFCs, SIGTRAN RFCs, SIP and RTP, etc. all released during those years. I was quite happy to see that for new protocols like SCTP and later DCCP, Linux quickly received a mainline implementation.

    Now most people won't have used SCTP so far, but it is a protocol used as transport layer in a lot of telecom protocols for more than a decade now. Virtually all protocols that have traditionally been spoken over time-division multiplex E1/T1 links have been migrated over to SCTP based protocol stackings.

    Working on various Open Source telecom related projects, I of course come into contact with SCTP every so often. Particularly some years back when implementing the Erlang SIGTRAN code in erlang/osmo_ss7 and most recently now with the introduction of libosmo-sigtran with its OsmoSTP, both part of the libosmo-sccp repository.

    I've also had to work with various proprietary telecom equipment over the years. Whether that's some eNodeB hardware from a large brand telecom supplier, or whether it's a MSC of some other vendor. And they all had one thing in common: Nobody seemed to use the Linux kernel SCTP code. They all used proprietary implementations in userspace, using RAW sockets on the kernel interface.

    I always found this quite odd, knowing that this is the route that you have to take on proprietary OSs without native SCTP support, such as Windows. But on Linux? Why? Based on rumors, people find the Linux SCTP implementation not mature enough, but hard evidence is hard to come by.

    As much as it pains me to say this, the kind of Linux SCTP bugs I have seen within the scope of our work on Osmocom seem to hint that there is at least some truth to this (see e.g. https://bugzilla.redhat.com/show_bug.cgi?id=1308360 or https://bugzilla.redhat.com/show_bug.cgi?id=1308362).

    Sure, software always has bugs and will have bugs. But we at Osmocom are 10-15 years "late" with our implementations of higher-layer protocols compared to what the mainstream telecom industry does. So if we find something, and we find it even already during R&D of some userspace code, not even under load or in production, then that seems a bit unsettling.

    One would have expected that, with all their market power and plenty of Linux-based devices in the telecom sphere, those large telecom suppliers would have invested in improving the mainline Linux SCTP code. I mean, they all use UDP and TCP of the kernel, so it works for most of the other network protocols in the kernel, but why not for SCTP? I guess it comes back to the fundamental lack of understanding of how open source development works: that it is something that the given industry/user base must invest in jointly.

    The latest discovered bug

    During the last months, I have been implementing SCCP, SUA, M3UA and OsmoSTP (A Signal Transfer Point). They were required for an effort to add 3GPP compliant A-over-IP to OsmoBSC and OsmoMSC.

    For quite some time I was seeing some erratic behavior when at some point the STP would not receive/process a given message sent by one of the clients (ASPs) connected. I tried to ignore the problem initially until the code matured more and more, but the problems remained.

    It became even more obvious when using Michael Tuexen's m3ua-testtool, where sometimes even the most basic test cases consisting of sending + receiving a single pair of messages like ASPUP -> ASPUP_ACK was failing. And when the test case was re-tried, the problem often disappeared.

    Also, whenever I tried to observe what was happening by means of strace, the problem would disappear completely and never re-appear until strace was detached.

    Of course, given that I've written several thousands of lines of new code, it was clear to me that the bug must be in my code. Yesterday I was finally prepared to accept that it might actually be a Linux SCTP bug. Not being able to reproduce that problem on a FreeBSD VM also pointed clearly in this direction.

    Now I could simply have collected some information and filed a bug report (which some kernel hackers at RedHat have thankfully invited me to do!), but I thought my use case was too complex. You would have to compile a dozen different Osmocom libraries, configure the STP, run the scheme-language m3ua-testtool in guile, etc. - I guess nobody would have bothered to go that far.

    So today I tried to implement a test case that reproduced the problem in plain C, without any external dependencies. And for many hours, I couldn't make the bug show up. I tried to be as close as possible to what was happening in OsmoSTP: I used non-blocking mode on client and server, used the SCTP_NODELAY socket option, and used the sctp_recvmsg() library wrapper to receive events, but the bug was not reproducible.

    Some hours later, it became clear that there was one setsockopt() in OsmoSTP (actually, libosmo-netif) which enabled all existing SCTP events. I did this at the time to make sure OsmoSTP has the maximum insight possible into what's happening on the SCTP transport layer, such as address fail-overs and the like.

    As it turned out, adding that setsockopt for the SCTP event flags to my test code made the problem reproducible. After playing around with the individual flags, it seems that enabling the SENDER_DRY_EVENT flag makes the bug appear.
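
    For context, subscribing to such notifications from userspace is normally done through the SCTP_EVENTS socket option with a struct sctp_event_subscribe; a minimal sketch of enabling the sender-dry notification (my own illustration, not the actual libosmo-netif code) might look like this:

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/sctp.h>   /* from lksctp-tools */

    /* Subscribe to the SENDER_DRY notification on an SCTP socket.
     * Returns 0 on success, -1 on error (errno set by setsockopt). */
    static int subscribe_sender_dry(int fd)
    {
        struct sctp_event_subscribe ev;

        memset(&ev, 0, sizeof(ev));
        ev.sctp_data_io_event = 1;      /* still deliver per-message SNDRCV info */
        ev.sctp_sender_dry_event = 1;   /* the event that triggers the bug */

        return setsockopt(fd, IPPROTO_SCTP, SCTP_EVENTS, &ev, sizeof(ev));
    }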

    You can find my detailed report about this issue in https://bugzilla.redhat.com/show_bug.cgi?id=1442784 and a program to reproduce the issue at http://people.osmocom.org/laforge/sctp-nonblock/sctp-dry-event.c

    Inside the Osmocom world, luckily we can live without the SENDER_DRY_EVENT and a corresponding work-around has been submitted and merged as https://gerrit.osmocom.org/#/c/2386/

    With that work-around in place, suddenly all the m3ua-testtool and sua-testtool test cases are reliably green (PASSED) and OsmoSTP works more smoothly, too.

    What do we learn from this?

    Free Software in the Telecom sphere is getting too little attention. This is true even for those small portions of telecom relevant protocols that ended up in the kernel, like SCTP or more recently the GTP module I co-authored. They are getting too little attention in development, even less attention in maintenance, and people seem to focus more on not using them, rather than fixing and maintaining what is there.

    It makes me really sad to see this. Telecoms is such a massive industry, with billions upon billions of revenue for the classic telecom equipment vendors. Surely, they would be able to co-invest in some basic infrastructure like proper and reliable testing / continuous integration for SCTP. More recently, we see millions and more millions of VC cash burned by buzzword-flinging companies doing "NFV" and "SDN". But then they rather reimplement network stacks in userspace than fix, complete and test those little telecom infrastructure components which we have so far, like the SCTP protocol :(

    Where are the contributions to open source telecom parts from Ericsson, Nokia (former NSN), Huawei and the like? I'm not even dreaming about the actual applications / network elements, but merely the maintenance of something as basic as SCTP. To be fair, Motorola was involved early on in the Linux SCTP code, and Huawei contributed a long series of fixes in 2013/2014. But that's not the kind of long-term maintenance contribution that one would normally expect from the primary interest group in SCTP.

    Finally, let me thank the Linux SCTP maintainers. I'm not complaining about them! They're doing a great job, given the arcane code base and the fact that they are not working for a company that has SCTP based products as their core business. I'm sure they would love more support and contributions from the Telecom world, too.

    by Harald Welte at April 16, 2017 10:00 PM

    April 09, 2017

    Harald Welte

    SIGTRAN/SS7 stack in libosmo-sigtran merged to master

    As I wrote in my blog post in February, I was working towards a more fully-featured SIGTRAN stack in the Osmocom (C-language) universe.

    The trigger for this is the support of 3GPP compliant AoIP (with a BSSAP/SCCP/M3UA/SCTP protocol stacking), but it is of much more general nature.

    The code has finally matured in my development branch(es) and is now ready for mainline inclusion. It's a series of about 77 (!) patches, some of which already are the squashed results of many more incremental development steps.

    The result is as follows:

    • General SS7 core functions maintaining links, linksets and routes
    • xUA functionality for the various User Adaptations (currently SUA and M3UA supported)
      • MTP User SAP according to ITU-T Q.701 (using osmo_prim)
      • management of application servers (AS)
      • management of application server processes (ASP)
      • ASP-SM and ASP-TM state machine for ASP, AS-State Machine (using osmo_fsm)
      • server (SG) and client (ASP) side implementation
      • validated against ETSI TS 102 381 (by means of Michael Tuexen's m3ua-testtool)
      • support for dynamic registration via RKM (routing key management)
      • osmo-stp binary that can be used as Signal Transfer Point, with the usual "Cisco-style" command-line interface that all Osmocom telecom software has.
    • SCCP implementation, with strong focus on Connection Oriented SCCP (as that's what the A interface uses).
      • osmo_fsm based state machine for SCCP connection, both incoming and outgoing
      • SCCP User SAP according to ITU-T Q.711 (osmo_prim based)
      • Interfaces with underlying SS7 stack via MTP User SAP (osmo_prim based)
      • Support for SCCP Class 0 (unit data) and Class 2 (connection oriented)
      • All SCCP + SUA Address formats (Global Title, SSN, PC, IPv4 Address)
      • SCCP and SUA share one implementation, where SCCP messages are transcoded into SUA before processing, and re-encoded into SCCP after processing, as needed.

    I have already moved experimental OsmoMSC and OsmoHNB-GW over to libosmo-sigtran. They're now all just M3UA clients (ASPs) which connect to osmo-stp to exchange SCCP messages back and forth between them.

    What's next on the agenda is to

    • finish my incomplete hacks to introduce IPA/SCCPlite as an alternative to SUA and M3UA (for backwards compatibility)
    • port over OsmoBSC to the SCCP User SAP of libosmo-sigtran
      • validate with the SCCPlite lower layer against existing SCCPlite MSCs
    • implement BSSAP / A-interface procedures in OsmoMSC, on top of the SCCP-User SAP.

    If those steps are complete, we will have a single OsmoMSC that can talk both IuCS to the HNB-GW (or RNCs) for 3G/3.5G as well as AoIP towards OsmoBSC. We will then have fully SIGTRAN-enabled the full Osmocom stack, and are all on track to bury the OsmoNITB that was devoid of such interfaces.

    If any reader is interested in interoperability testing with other implementations, either on M3UA or on SCCP or even on A or Iu interface level, please contact me by e-mail.

    by Harald Welte at April 09, 2017 10:00 PM

    April 08, 2017

    Andrew Zonenberg, Silicon Exposed

    STARSHIPRAIDER: Preparing for high-speed I/O characterization

    In my previous post, I characterized the STARSHIPRAIDER I/O circuit for high voltage fault transient performance, but was unable to adequately characterize the high speed data performance because my DSO (Rigol DS1102D) only has 100 MHz of bandwidth.

    Although I did have some ideas on how to improve the performance of the current I/O circuit, it was already faster than I could measure so I had no way to know if my improvements were actually making it any better. Ideally I'd just buy an oscilloscope with several GHz of bandwidth, but I'm not made of money and those scopes tend to be in the "request a quote" price range.

    The obvious solution was to build one. I already had a proven high-speed sampling architecture from my TDR project so all I had to do was repackage it as an oscilloscope and make it faster still.

    The circuit was beautifully simple: an output from the FPGA drives a 50 ohm trace to a SMA connector, then a second SMA connector drives the positive input of an ADCMP572 through a 3 dB attenuator (to keep my signal within range). The negative input is driven by a cheap 12-bit I2C DAC. The comparator output is then converted from CML to LVDS and fed to the host FPGA board. Finally, a 3.3V CML output from the FPGA drives the latch enable input on the comparator.

    The "ADC" algorithm is essentially the same as on my TDR. I like to think of it as an equivalent-time version of a flash ADC: rather than 256 comparators digitizing the signal once, I digitize the signal 256 times with one comparator (and of course 256 different reference voltages). The post-processing to turn the comparator outputs into 8-bit ADC codes is the same.

    Unlike the TDR, however, I also do equivalent-time sampling in the time domain. The FPGA generates the sampling and PRBS clocks with different PLL outputs (at 250 MHz / 4 ns period), and sweeps the relative phase in 100 ps steps to produce an effective 10 Gsps sample rate (100 ps timebase).
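    As a rough illustration of that post-processing (a sketch with made-up names and shapes, not the actual firmware or host code), the comparator sweep can be collapsed into equivalent-time ADC codes like this:

    import numpy as np

    # Sketch only: comp[v, p, n] holds the comparator output (0/1) for
    # reference-voltage step v, phase step p, and coarse sample n taken with
    # the 250 MHz capture clock.  All names and shapes here are hypothetical.
    N_VREF = 256    # reference-voltage steps (one "virtual" flash comparator each)
    N_PHASE = 40    # 100 ps phase steps across one 4 ns capture-clock period

    def reconstruct_waveform(comp: np.ndarray) -> np.ndarray:
        """Collapse a comparator sweep into 8-bit equivalent-time ADC codes.

        For each (phase, sample) point the code is simply the number of
        reference-voltage steps the input exceeded, i.e. the equivalent-time
        analogue of a flash ADC's thermometer code.
        """
        assert comp.shape[:2] == (N_VREF, N_PHASE)
        codes = np.minimum(comp.sum(axis=0), 255)    # shape (N_PHASE, n_samples)
        # Interleave phases: sample n at phase p corresponds to t = n*4ns + p*100ps,
        # giving one waveform at the effective 10 Gsps rate.
        return codes.T.reshape(-1).astype(np.uint8)

    # Example with random data, just to show the shapes involved:
    fake = (np.random.default_rng(0).random((N_VREF, N_PHASE, 32)) > 0.5).astype(np.uint8)
    print(reconstruct_waveform(fake).shape)   # (1280,) = 32 coarse samples * 40 phases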

    Without further ado here's a picture of the board. Total BOM cost including connectors and PCB was approximately $50.

    Oscilloscope board (yes, it's PMOD form factor!)
    After some initial firmware development I was able to get some preliminary eye renders off the board. They were, to say the least, not ideal.

    250 Mbps: very bumpy rise
    500 Mbps: significant eye closure even with increased drive strength

    I spent quite a while tracking down other bugs before dealing with the signal integrity issues. For example, a low-frequency pulse train showed up with a very uneven duty cycle:

    Duty cycle distortion
    Someone suggested that I try a slow rise time pulse to show the distortion more clearly. Not having a proper arbitrary waveform generator, I made do with a squarewave and R-C lowpass filter.

    Ever seen breadboarded passives interfacing to edge-launch SMA connectors before?
    It appeared that I had jump discontinuities in my waveform every two blocks (color coding)
    I don't have an EE degree, but I can tell this looks wrong!

    Interestingly enough, two blocks (of 32 samples each) were concatenated into a single JTAG transfer. These two were read in one clock cycle and looked fine, but the junction to the next transfer seemed to be skipping samples.

    As it turned out, I had forgotten to clear a flag which led to me reading the waveform data before it was done capturing. Since the circular buffer was rotating in between packets, some samples never got sent.

    The next bug required zooming into the waveform a bit to see. The samples captured on the first few (the number seemed to vary across bitstream builds) of my 40 clock phases were showing up shifted by 4 ns (one capture clock).

    Horizontally offset samples

    I traced this issue to a synchronizer between clock domains having variable latency depending on the phase offset of the source and destination clocks. This is an inherent issue in clock domain crossing, so I think I'm just going to have to calibrate it out somehow. For the short term I'm manually measuring the number of offset phases each time I recompile the FPGA image, and then correcting the data in post-processing.
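    A minimal sketch of what that manual correction might look like (hypothetical helper, assuming the interleaved sample ordering from the sketch above and an offset count measured by hand per bitstream build):

    import numpy as np

    def correct_offset_phases(wave: np.ndarray, offset_phases: int,
                              n_phases: int = 40) -> np.ndarray:
        """Re-align samples captured on the first `offset_phases` clock phases.

        `wave` is the interleaved equivalent-time waveform (phase index =
        position % n_phases); the affected phases are shifted by one capture
        clock (4 ns), i.e. by one row of the (coarse sample, phase) matrix.
        The sign of the shift depends on the synchronizer behaviour, so it
        would have to match what is actually measured on the hardware.
        """
        m = wave.reshape(-1, n_phases).copy()
        m[:-1, :offset_phases] = m[1:, :offset_phases]   # move them one capture clock
        return m[:-1].reshape(-1)                        # drop the now-incomplete last row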

    The final issue was a hardware bug. I was terminating the incoming signal with a 50Ω resistor to ground. Although this had good AC performance, at DC the current drawn from a high-level input was quite significant (66 mA at 3.3V). Since my I/O pins can't drive this much, the line was dragged down.

    I decided to rework the input termination to replace the 50Ω terminator with split 100Ω resistors to 3.3V and ground. This should have about half the DC current draw, and is Thevenin equivalent to a 50Ω terminator to 1.65V. As a bonus, the mid-level termination will also allow me to AC-couple the incoming signal if that becomes necessary.
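    For reference, the numbers behind that claim (plain arithmetic, nothing board-specific):

    VCC = 3.3          # logic-high level, volts
    R_OLD = 50.0       # original single terminator to ground, ohms
    R_SPLIT = 100.0    # each resistor of the new split terminator, ohms

    i_old = VCC / R_OLD                          # 66 mA drawn at a static high level
    i_new = VCC / R_SPLIT                        # 33 mA: the high-side resistor carries
                                                 # no current when the line sits at 3.3 V
    r_th = (R_SPLIT * R_SPLIT) / (2 * R_SPLIT)   # 50 ohm Thevenin resistance
    v_th = VCC * R_SPLIT / (2 * R_SPLIT)         # 1.65 V Thevenin voltage

    print(f"{i_old*1e3:.0f} mA -> {i_new*1e3:.0f} mA, "
          f"equivalent to {r_th:.0f} ohms terminated to {v_th:.2f} V")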

    Mill out trace from ground via to on-die 50Ω termination resistor

    Remove soldermask from ground via and signal trace

    Add 100Ω 0402 low-side terminator
    Add 100Ω 0402 high-side terminator, plus jumper trace to 3.3V bulk decoupling cap

    Add 10 nF high speed decoupling cap to help compensate for inductance of long feeder trace
    I cleaned off all of the flux residue and ran a second set of eye loopback tests at 250 and 500 Mbps. The results were dramatically improved:

    Post-rework at 250 Mbps
    Post-rework at 500 Mbps
    While not perfect, the new eye openings are a lot cleaner. I hope to tweak my input stage further to reduce probing artifacts, but for the time being I think I have sufficient performance to compare multiple STARSHIPRAIDER test circuits and see how they stack up at relatively high speeds.

    Next step: collect some baseline data for the current STARSHIPRAIDER characterization board, then use that to inform my v0.2 I/O circuit!

    by Andrew Zonenberg (noreply@blogger.com) at April 08, 2017 01:50 AM

    March 31, 2017

    Bunnie Studios

    Name that Ware, March 2017

    The Ware for March 2017 is shown below.

    I honestly have no idea what this one is from or what it’s for — found it in a junk pile in China. But I was amused by the comically huge QFP, so I snapped a shot of it.

    Sorry this is a little late — been ridiculously busy prepping for the launch of a line of new products for Chibitronics, going beta (hopefully) next month.

    by bunnie at March 31, 2017 05:52 PM

    Winner, Name that Ware February 2017

    The Ware for February 2017 is a Data Harvest EcoLog.

    A number of people guessed it was a datalogger of some type, but didn’t quite identify the manufacturer or model correctly. That being said, I found Josh Myer’s response an interesting read, so I’ll give the prize to him. Congrats, email me for your prize!

    by bunnie at March 31, 2017 05:52 PM

    March 30, 2017

    Open Hardware Repository

    White Rabbit - Tutorial at ICALEPCS conference in October

    The ICALEPCS organizing committee will organize a pre-conference workshop on WR in Barcelona in October.

    We intend to make this more of a "WR tutorial", and we think there will be something to learn and discuss for everybody: newcomers, casual users and even experts.

    Online registration will open on April 17. Registration for the workshop is independent of registration to the conference. If you register, it will be a great pleasure to see you there. Also, please send me comments on the program if you have any. We still have a bit of freedom to change it if need be.

    And of course, please forward this to any other people you think could be interested!

    Javier

    by Erik van der Bij (Erik.van.der.Bij@cern.ch) at March 30, 2017 01:59 PM

    March 29, 2017

    Free Electrons

    Free Electrons at the Netdev 2.1 conference

    Netdev 2.1 is the fourth edition of the technical conference on Linux networking. This conference is driven by the community and focuses on both the kernel networking subsystems (device drivers, net stack, protocols) and their use in user-space.

    This edition will be held in Montreal, Canada, April 6 to 8, and the schedule was posted recently, featuring amongst other things a talk giving an overview and current status of the Distributed Switch Architecture (DSA), and a workshop about how to enable drivers to cope with heavy workloads and improve performance.

    At Free Electrons, we regularly work on networking related topics, especially as part of our Linux kernel contributions for the support of Marvell or Annapurna Labs ARM SoCs. Therefore, we decided to attend our first Netdev conference to stay up-to-date with the network subsystem and network driver capabilities, and to learn from the community's latest developments.

    Our engineer Antoine Ténart will be representing Free Electrons at this event. We’re looking forward to being there!

    by Antoine Ténart at March 29, 2017 02:35 PM

    March 26, 2017

    Harald Welte

    OsmoCon 2017 Updates: Travel Grants and Schedule

    /images/osmocon.png

    April 21st is approaching fast, so here are some updates. I'm particularly happy that we now have travel grants available. So if travel expenses were preventing you from attending so far: This excuse is no longer valid!

    Get your ticket now, before it is too late. There's a limited number of seats available.

    OsmoCon 2017 Schedule

    The list of talks for OsmoCon 2017 has been available for quite some weeks, but today we finally published the first actual schedule.

    As you can see, the day is fully packed with talks about Osmocom cellular infrastructure projects. We had to cut some talk slots short (30min instead of 45min), but I'm confident that it is good to cover a wider range of topics, while at the same time avoiding fragmenting the audience with multiple tracks.

    OsmoCon 2017 Travel Grants

    We are happy to announce that we have received donations that allow us to provide travel grants!

    This means that any attendee who is otherwise not able to cover their travel to OsmoCon 2017 (e.g. because their interest in Osmocom is not related to their work, or because their employer doesn't pay the travel expenses) can now apply for such a travel grant.

    For more details see OsmoCon 2017 Travel Grants and/or contact osmocon2017@sysmocom.de.

    OsmoCon 2017 Social Event

    Tech Talks are nice and fine, but what many people enjoy even more at conferences is the informal networking combined with good food. For this, we have the social event at night, which is open to all attendees.

    See more details about it at OsmoCon 2017 Social Event.

    by Harald Welte at March 26, 2017 10:00 PM

    March 23, 2017

    Harald Welte

    Upcoming v3 of Open Hardware miniPCIe WWAN modem USB breakout board

    Back in October 2016 I designed a small open hardware breakout board for WWAN modems in mPCIe form-factor. I was thinking some other people might be interested in this, and indeed, the first manufacturing batch is already sold out by now.

    Instead of ordering more of the old (v2) design, I decided to do some improvements in the next version:

    • add mounting holes so the PCB can be mounted via M3 screws
    • add U.FL and SMA sockets, so the modems are connected via a short U.FL to U.FL cable, and external antennas or other RF components can be attached via SMA. This provides strain relief for the external antenna or cabling and avoids tearing off any of the current loose U.FL to SMA pigtails
    • flip the SIM slot to the top side of the PCB, so it can be accessed even after mounting the board to some base plate or enclosure via the mounting holes
    • more meaningful labeling of the silk screen, including the purpose of the jumpers and the input voltage.

    A software rendering of the resulting v3 PCB design files that I just sent for production looks like this:

    /images/mpcie-breakout-v3-pcb-rendering.png

    Like before, the design of the board (including schematics and PCB layout design files) is available as open hardware under CC-BY-SA license terms. For more information see http://osmocom.org/projects/mpcie-breakout/wiki

    It will take an expected three weeks until I see the first assembled boards.

    I'm also planning to do an M.2 / NGFF version of it, but haven't found the time to get around to doing it so far.

    by Harald Welte at March 23, 2017 11:00 PM

    March 21, 2017

    Harald Welte

    Osmocom - personal thoughts

    As I just wrote in my post about TelcoSecDay, I sometimes worry about the choices I made with Osmocom, particularly when I see all the great stuff people are doing in fields that I was previously working in, such as applied IT security as well as Linux kernel development.

    History

    When people like Dieter, Holger and I started to play with what later became OpenBSC, it was just for fun. A challenge to master. A closed world to break open and to attack with the tools, the mindset and the values that we brought with us.

    Later, Holger and I started to do freelance development for commercial users of Osmocom (initially basically only OpenBSC, but then OsmoSGSN, OsmoBSC, OsmoBTS, OsmoPCU and all the other bits on the infrastructure side). This led to the creation of sysmocom in 2011, and ever since we have been trying to use revenue from hardware sales as well as development contracts to subsidize and grow the Osmocom projects. We're investing most of our earnings directly into more staff that in turn works on Osmocom related projects.

    NOTE

    It's important to draw the distinction between the Osmocom cellular infrastructure projects, which are mostly driven by commercial users and sysmocom these days, and the many other purely just-for-fun community projects under the Osmocom umbrella, like OsmocomTETRA, OsmocomGMR, rtl-sdr, etc. I'm focussing only on the cellular infrastructure projects, as they have been at the center of my life during the past 6+ years.

    In order to do this, I basically gave up my previous career[s] in IT security and Linux kernel development (as well as put things like gpl-violations.org on hold). This is a big price to pay for creating more FOSS in the mobile communications world, and sometimes I'm a bit melancholic about the "old days" before.

    Financial wealth is clearly not my primary motivation, but let me be honest: I could have easily earned a shitload of money continuing to do freelance Linux kernel development, IT security or related consulting. There's a lot of demand for related skills, particularly with some experience and reputation attached. But I decided against it, and worked several years without a salary (or almost none) on Osmocom related stuff [as did Holger].

    But then, even with all the sacrifices made, and the amount of revenue we can direct from sysmocom into Osmocom development: given the complexity of cellular infrastructure, the amount of funding and resources is always only a fraction of what one would normally want for a proper implementation. So it's a constant resource shortage, combined with lots of unpaid work on those areas that are on the immediate short-term feature list of customers, and that nobody else in the community feels like working on. And that can be a bit frustrating at times.

    Is it worth it?

    So after 7 years of OpenBSC, OsmocomBB and all the related projects, I'm sometimes asking myself whether it has been worth the effort, and whether it was the right choice.

    It was right in the sense that cellular technology is still an area that's obscure and unknown to many, and that has very little FOSS (though improving!). At the same time, cellular networks are becoming more and more essential to many users and applications. So on an abstract level, I think that every step in the direction of FOSS for cellular is as urgently needed as before, and we have had quite some success in implementing many different protocols and network elements. Unfortunately, in most cases incompletely, as the amount of funding and/or resources was always extremely limited.

    Satisfaction/Happiness

    On the other hand, when it comes to metrics such as personal satisfaction or professional pride, I'm not very happy or satisfied. The community remains small, the commercial interest remains limited, and as opposed to the Linux world, most players have a complete lack of understanding that FOSS is not a one-way road, but that it is important for all stakeholders to contribute to the development in terms of development resources.

    Project success?

    I think a collaborative development project (which to me is what FOSS is about) is only truly successful if its success is not tied to a single individual, a single small group of individuals or a single entity (company). And no matter how much I would like the above to be the case, it is not true for the Osmocom cellular infrastructure projects. Take away Holger and me, or take away sysmocom, and I think it would be pretty much dead. And I don't think I'm exaggerating here. This makes me sad, and after all these years, and after knowing quite a number of commercial players using our software, I would have hoped that the project rests on many more shoulders by now.

    This is not to belittle the efforts of all the people contributing to it, whether the team of developers at sysmocom, whether those in the community that still work on it 'just for fun', or whether those commercial users that contract sysmocom for some of the work we do. Also, there are known and unknown donors/funders, like the NLnet foundation for some parts of the work. Thanks to all of you, and clearly we wouldn't be where we are now without all of that!

    But I feel it's not sufficient for the overall scope, and it's not [yet] sustainable at this point. We need more support from all sides, particularly those not currently contributing. From vendors of BTSs and related equipment that use Osmocom components. From operators that use it. From individuals. From academia.

    Yes, we're making progress. I'm happy about new developments like the Iu and Iuh support, the OsmoHLR/VLR split and 2G/3G authentication that Neels just blogged about. And there's progress on the SIMtrace2 firmware with card emulation and MITM, just as well as there's progress on libosmo-sigtran (with a more complete SUA, M3UA and connection-oriented SCCP stack), etc.

    But there are too few people working on this, and those people are mostly coming from one particular corner, while most of the [commercial] users do not contribute the way you would expect them to contribute in collaborative FOSS projects. You can argue that most people in the Linux world also don't contribute, but there the large commercial beneficiaries (like the chipset and hardware makers) mostly do, as do the large commercial users.

    All in all, I have the feeling that Osmocom is as important as it ever was, but it's not grown up yet to really walk on its own feet. It may be able to crawl, though ;)

    So for now, don't panic. I'm not suffering from burn-out or a mid-life crisis, and I don't plan on any big changes of where I put my energy: It will continue to be Osmocom. But I also think we have to have a more open discussion with everyone on how to move beyond the current situation. There's no point in staying quiet about it, or in claiming that everything is fine the way it is. We need more commitment. Not from the people already actively involved, but from those who are not [yet].

    If that doesn't happen in the next let's say 1-2 years, I think it's fair that I might seriously re-consider in which field and in which way I'd like to dedicate my [I would think considerable] productive energy and focus.

    by Harald Welte at March 21, 2017 06:00 PM

    Returning from TelcoSecDay 2017 / General Musings

    I'm just on my way back from the Telecom Security Day 2017 <https://www.troopers.de/troopers17/telco-sec-day/>, which is an invitation-only event about telecom security issues hosted by ERNW back-to-back with their Troopers 2017 <https://www.troopers.de/troopers17/> conference.

    I've been presenting at TelcoSecDay in previous years and hence was again invited to join (as an attendee). The event has really gained quite some traction. Where early on you could find mostly IT security / hacker crowds, the number of participants from the operator (and to a smaller extent also equipment maker) industry has been growing.

    The quality of talks was great, and I enjoyed meeting various familiar faces. It's just a pity that it's only a single day - plus I had to head back to Berlin still today so I had to skip the dinner + social event.

    When attending events like this, and seeing the interesting hacks that people are working on, it pains me a bit that I haven't really been doing much security work in recent years. netfilter/iptables was at least somewhat security related. My work on OpenPCD / librfid was clearly RFID security oriented, as was the work on airprobe, OsmocomTETRA, or even the EasyCard payment system hack.

    I have the same feeling when attending Linux kernel development related events. I have very fond memories of working in both fields, and it was a lot of fun. Also, to be honest, I believe that the work in Linux kernel land and in general IT security research was/is appreciated much more than the endless months and years I'm now spending on improving and extending the Osmocom cellular infrastructure stack.

    Beyond the appreciation, it's also the fact that both the IT security and the Linux kernel communities are much larger. There are more people to learn from and learn with, to engage in discussions and ping-pong ideas. In Osmocom, the community is too small (and I have the feeling, it's actually shrinking), and in many areas it rather seems like I am the "ultimate resource" to ask, whether about 3GPP specs or about Osmocom code structure. What I'm missing is the feeling of being part of a bigger community. So in essence, my current role in the "Open Source Cellular" corner can be a very lonely one.

    But hey, I don't want to sound more depressed than I am, this was supposed to be a post about TelcoSecDay. It just happens that attending IT Security and/or Linux Kernel events makes me somewhat gloomy for the above-mentioned reasons.

    Meanwhile, if you have some interesting projects/ideas at the border between cellular protocols/systems and security, I'd of course love to hear if there's some way to get my hands dirty in that area again :)

    by Harald Welte at March 21, 2017 05:00 PM

    March 16, 2017

    Open Hardware Repository

    CERN BE-CO-HT contribution to KiCad - Support of free software in public institutions:

    At the Octave conference in Geneva the presentation Support of free software in public institutions: the KiCad case will be given by Javier Serrano and Tomasz Wlostowski from CERN.

    KiCad is a tool to help electronics designers develop Printed Circuit Boards (PCB). CERN's BE-CO-HT section has been contributing to its development since 2011. These efforts are framed in the context of CERN's activities regarding Open Source Hardware (OSHW), and are meant to provide an environment where design files for electronics can be shared in an efficient way, without the hurdles imposed by the use of proprietary formats.

    The talk will start by providing some context about OSHW and the importance of using Free Software tools for sharing design files. We will then move on to a short KiCad tutorial, and finish with some considerations about the role public institutions can play in developing and fostering the use of Free Software, and whether some of the KiCad experience can apply in other contexts.

    Access to the presentation: Support of free software in public institutions: the KiCad case

    by Erik van der Bij (Erik.van.der.Bij@cern.ch) at March 16, 2017 04:42 PM

    March 15, 2017

    Bunnie Studios

    Looking for Summer Internship in Hardware Hacking?

    Tim Ansell (mithro), who has been giving me invaluable advice and support on the NeTV2 project, just had his HDMI (plaintext) video capture project accepted into the Google Summer of Code. This summer, he’s looking for university students who have an interest in learning FPGAs, hacking on video, or designing circuits. To learn more you can check out his post at hdmi2usb.tv.

    I’ve learned a lot working with Tim. I also respect his work ethic and he is a steadfast contributor to the open source community. This would be an excellent summer opportunity for any student interested in system-level hardware hacking!

    Please note: application deadline is April 3 16:00 UTC.

    by bunnie at March 15, 2017 07:47 AM

    March 10, 2017

    Free Electrons

    Free Electrons at the Embedded Linux Conference 2017

    Last month, five engineers from Free Electrons participated in the Embedded Linux Conference in Portland, Oregon. It was once again a great conference to learn new things about embedded Linux and the Linux kernel, and to meet developers from the open-source community.

    Free Electrons team at work at ELC 2017, with Maxime Ripard, Antoine Ténart, Mylène Josserand and Quentin Schulz

    Free Electrons talks

    Free Electrons CEO Michael Opdenacker gave a talk on Embedded Linux Size Reduction techniques, for which the slides and video are available:

    Free Electrons engineer Quentin Schulz gave a talk on Power Management Integrated Circuits: Keep the Power in Your Hands, the slides and video are also available:

    Free Electrons selection of talks

    Of course, the slides from many other talks are progressively being uploaded, and the Linux Foundation published the video recordings in record time: they are all already available on Youtube!

    Below, each Free Electrons engineer who attended the conference has selected one talk he/she liked, and gives a quick summary of the talk, hopefully to encourage you to watch the corresponding video recording.

    Using SWupdate to Upgrade your system, Gabriel Huau

    Talk selected by Mylène Josserand.

    Gabriel Huau from Witekio gave a great talk at ELC about SWUpdate, a tool created by Denx to update your system. The talk gives an overview of this tool, how it works and how to use it. Updating your system is very important for embedded devices, to fix bugs, apply security fixes or add new features, but in an industrial context it is sometimes difficult to perform an update: devices not easily accessible, large number of devices and variants, etc. A tool that can update the system automatically or even Over The Air (OTA) can be very useful. SWUpdate is one of them.

    SWUpdate allows updating different parts of an embedded system such as the bootloader, the kernel, the device tree, the root file system and also the application data.
    It handles different image types: UBI, MTD, raw, custom Lua, U-Boot environment and even your custom one. It includes a notifier to be able to receive feedback about the update process, which can be useful in some cases. SWUpdate uses different local and OTA/remote interfaces such as USB, SD card, HTTP, etc. It is based on a simple update image format to indicate which images must be updated.

    Many customizations can be done with this tool as it is provided with the classic menuconfig configuration tool. One great thing is that this tool is supported by Yocto Project and Buildroot so it can be easily tested.

    Do not hesitate to have a look at his slides, the video of his talk, or to directly test SWUpdate!

    GCC/Clang Optimizations for embedded Linux, Khem Raj

    Talk selected by Michael Opdenacker.

    Khem Raj from Comcast is a frequent speaker at the Embedded Linux Conference, and one of his strong fields of expertise is C compilers, especially LLVM/Clang and Gcc. His talk at this conference will interest anyone developing code in the C language who wants to know about the optimizations compilers can use to improve the performance or size of generated binaries. See the video and slides.

    Khem Raj's slide about compiler optimization options

    One noteworthy optimization is Clang's -Oz (Gcc doesn't have it), which goes even beyond -Os by disabling loop vectorization. Note that Clang already performs better than Gcc in terms of code size (according to our own measurements). On the topic of bundle optimizations such as -O2 or -Os, Khem added that specific optimizations can be disabled in both compilers through the -fno- command line option preceding the name of a given optimization. The name of each optimization in a given bundle can be found through the -fverbose-asm command line option.

    Another new optimization option is -Og, which is different from the traditional -g option. It still allows producing code that can be debugged, but in a way that provides a reasonable level of runtime performance.

    On the performance side, he also recalled the Feedback-Directed Optimizations (FDO), already covered in earlier Embedded Linux Conferences, which can be used to feed the compiler with profiler statistics about code branches. The compiler can use such information to optimize the branches which are most frequent at run-time.

    Khem's last piece of advice was not to optimize too early, and to make sure you do your debugging and profiling work first, as heavily optimized code can be very difficult to debug. Therefore, optimizations are for well-proven code only.

    Note that Khem also gave a similar talk in the IoT track of the conference, which was more focused on bare-metal code optimization and portability: “Optimizing C for microcontrollers” (slides, video).

    A Journey through Upstream Atomic KMS to Achieve DP Compliance, Manasi Navare

    Talk selected by Quentin Schulz.

    This talk was about the journey of a newcomer in the mainline kernel community to fix DisplayPort support in the Intel i915 DRM driver. It first presented what happens from the moment we plug a cable into a monitor until we actually see an image, and then where the driver sits in the kernel: in the DRM subsystem, between the hardware (an Intel integrated graphics device) and the libdrm userspace library on which userspace applications such as the X server rely.

    The bug to fix was the case where the driver would fail after updating to the requested resolution on a DP link. The other existing drivers usually fail before updating the resolution, so Manasi had to add a way to tell userspace that the DP link failed after updating the resolution. Such an addition would be useless without applications using this new information, so she had to work with their developers to make the applications behave correctly when reading this important information.

    With a working set of patches, she thought she had done most of the work, with only the upstreaming left, and didn't know it would take her many versions to make it upstream. She wished she had sent a first version of the driver for review earlier, to save time over the whole development plus upstreaming process. She also had to make sure the changes in the userspace applications would be ready when the driver was upstreamed.

    The talk was a good introduction to how DisplayPort works and an excellent example of why involving the community even in the early stages of the development process may be a good idea: it quickens the overall driver development process by avoiding complete rewrites of some code parts once upstreaming is under way.

    See also the video and slides of the talk.

    Timekeeping in the Linux Kernel, Stephen Boyd

    Talk selected by Maxime Ripard.

    Stephen did a great talk about one thing that is often overlooked, and really shouldn't be: timekeeping. He started by explaining the various timekeeping mechanisms, both in hardware and in how Linux uses them. That meant covering the counters, timers, the tick, the jiffies, and the various POSIX clocks, and detailing the various frameworks using them. He also explained the various bugs that might be encountered when having a too-naive counter implementation, for example, or when using the wrong POSIX clock from an application.

    See also the video and slides of the talk.

    Android Things, Karim Yaghmour

    Talk selected by Antoine Ténart

    Karim did a very good introduction to Android Things. His talk was a great overview of what this new OS from Google targeting embedded devices is, and where it comes from. He started by showing the history of Android, and explained what this system brought to the embedded market. He then switched to the birth of Android Things: a reboot of Google's strategy for connected devices. He finally gave an in-depth explanation of the internals of this new OS, comparing Android Things and Android, with lots of examples and demos.

    Android Things replaces Brillo / Weave, and unlike its predecessor is built reusing available tools and services. It’s in fact a lightweight version of Android, with many services removed and a few additions like the PIO API to drive GPIO, I2C, PWM or UART controllers. A few services were replaced as well, most notably the launcher. The result is a not so big, but not so small, system that can run on headless devices to control various sensors; with an Android API for application developers.

    See also the video and slides of the talk.

    by Thomas Petazzoni at March 10, 2017 09:01 AM

    March 07, 2017

    Harald Welte

    VMware becomes gold member of Linux Foundation: And what about the GPL?

    As we can read in recent news, VMware has become a gold member of the Linux Foundation. That gives me - to say the least - very mixed feelings.

    One thing to keep in mind: The Linux Foundation is an industry association; it exists to act in the joint interest of its paying members. It is not a charity, and it does not act for the public good. I know and respect that, while some people sometimes appear to be confused about its function.

    However, allowing an entity like VMware to join, despite their many years long disrespect for the most basic principles of the FOSS Community (such as: Following the GPL and its copyleft principle), really is hard to understand and accept.

    I wouldn't have any issue if VMware had (prior to joining the LF) said: Ok, we had some bad policies in the past, but now we fully comply with the license of the Linux kernel, and we release all derivative/collective works in source code. This would be a positive spin: Acknowledge past issues, resolve the issues, become clean and then publicly underline your support of Linux by (among other things) joining the Linux Foundation. I'm not one to hold grudges against people who accept their past mistakes, fix the present and then move on. But no, they haven't fixed any issues.

    They have had one of the worst track records in terms of intentional GPL compliance issues for many years, showing outright disrespect for Linux, the GPL and ultimately the rights of the Linux developers. Not resolving those issues while at the same time joining the Linux Foundation? What kind of message does that send?

    It sends the following messages:

    • you can abuse Linux, the GPL and copyleft while still being accepted amidst the Linux Foundation Members
    • it means the Linux Foundation has no ethical concerns whatsoever about accepting such entities without previously asking them to become clean
    • it also means that VMware has still not understood that Linux and FOSS are about your actions, particularly the choices you make about how to work with the community technically, and not against it.

    So all in all, I think this move has seriously damaged the image of both entities involved. I wouldn't have expected different of VMware, but I would have hoped the Linux Foundation had some form of standards as to which entities they permit amongst their ranks. I guess I was being overly naive :(

    It's a slap in the face of every developer who writes code not because he gets paid, but because it is rewarding to know that copyleft will continue to ensure the freedom of related code.

    UPDATE (March 8, 2017):
     I was mistaken in my original post in that VMware didn't just join, but was a Linux Foundation member already before; it is "just" their upgrade from silver to gold that made the news recently. I stand corrected. It still doesn't make it any better that they are involved inside the LF while stepping over the lines of license compliance.
    UPDATE2 (March 8, 2017):
     As some people pointed out, there is no verdict against VMware. Yes, that's true. But the mere fact that they rather distribute derivative works of GPL licensed software and take this to court with an armada of lawyers (instead of simply complying with the license like everyone else) is sad enough. By the time there will be a final verdict, the product is EOL. That's probably their strategy to begin with :/

    by Harald Welte at March 07, 2017 11:00 PM

    Gory details of USIM authentication sequence numbers

    I always thought I understood UMTS AKA (authentication and key agreement), including the re-synchronization procedure. It's been years since I wrote tools like osmo-sim-auth, which you can use to perform UMTS AKA with a SIM card inserted into a PC reader, i.e. simulate what happens between the AUC (authentication center) in a network and the USIM card.

    However, it is only now as the sysmocom team works on 3G support of the dedicated OsmoHLR (outside of OsmoNITB!), that I seem to understand all the nasty little details.

    I always thought that for re-synchronization it is sufficient to simply increment the SQN (sequence number). It turns out it isn't, as there is an MSB portion called SEQ and a lower-bit portion called IND, used for a fancy array-indexing scheme that tracks the highest-used SEQ within each IND bucket.
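    To make the SEQ/IND scheme a bit more concrete, here is a toy illustration (my own sketch, not OsmoHLR code), assuming a 48-bit SQN with the 5-bit IND partition that 3GPP TS 33.102 Annex C suggests:

    IND_BITS = 5                      # TS 33.102 Annex C suggests 5 bits for IND
    IND_MASK = (1 << IND_BITS) - 1

    def split_sqn(sqn: int) -> tuple:
        """Split a 48-bit SQN into its MSB part SEQ and its LSB part IND."""
        return sqn >> IND_BITS, sqn & IND_MASK

    def next_sqn(seq_by_ind: list, ind: int) -> int:
        """Generate the next SQN for a given IND bucket.

        seq_by_ind is the AUC-side array of the highest SEQ used per IND
        bucket; re-synchronization has to bump SEQ within the right bucket
        rather than just incrementing the whole SQN.
        """
        seq_by_ind[ind] += 1
        return (seq_by_ind[ind] << IND_BITS) | ind

    buckets = [0] * (1 << IND_BITS)   # 32 buckets, one per IND value
    for ind in (0, 1, 2, 0):
        print(hex(next_sqn(buckets, ind)))   # 0x20, 0x21, 0x22, 0x40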

    If you're interested in all the dirty details and the associated spec references (they always hide the important parts in some Annex), see the discussion between Neels and me in Osmocom redmine issue 1965.

    by Harald Welte at March 07, 2017 11:00 PM

    March 05, 2017

    Harald Welte

    GTA04 project halts GTA04A5 due to OMAP3 PoP soldering issues

    For those of you who don't know what the tinkerphones/OpenPhoenux GTA04 is: It is a 'professional hobbyist' hardware project (with at least public schematics, even if not open hardware in the sense that editable schematics and PCB design files are published) creating updated mainboards that can be used to upgrade Openmoko phones. They fit into the same enclosure and can use the same display/speaker/microphone.

    What the GTA04 guys have been doing for many years is close to a miracle anyway: Trying to build a modern-day smartphone in low quantities, using off-the-shelf components available in those low quantities, and without a large company with its associated financial backing.

    Smartphones are complex because they are highly integrated devices. A seemingly unlimited number of components is squeezed into the tiniest form factors. This leads to complex circuit boards with many layers that take a lot of effort to design, and are expensive to build in low quantities. The fine-pitch components mandated by the integration density are another issue.

    Building the original GTA01 (Neo 1973) and GTA02 (FreeRunner) devices at Openmoko, Inc. must seem like a piece of cake compared to what the GTA04 guys are up to. We had a team of engineers that were at least familiar with feature phone design before, and we had the backing of a consumer electronics company with all its manufacturing resources and expertise.

    Nevertheless, a small group of people around Dr. Nikolaus Schaller has been pushing the limits of what you can do in a small, for-fun project, and they have my utmost respect. Well done!

    Unfortunately, there are bad news. Manufacturing of their latest generation of phones (GTA04A5) has been stopped due to massive soldering problems with the TI OMAP3 package-on-package (PoP). Those PoPs are basically "RAM chip soldered onto the CPU, and the stack of both soldered to the PCB". This is used to save PCB footprint and to avoid having to route tons of extra (sensitive, matched) traces between the SDRAM and the CPU.

    According to the mailing list posts, it seems to be incredibly difficult to solder the PoP stack due to the way TI has designed the packaging of the DM3730. If you want more gory details, see this post and yet another post.

    It is very sad to see that what appears to be bad design choices at TI are going to bring the GTA04 project to a halt. The financial hit by having only 33% yield is already more than the small community can take, let alone unused parts that are now in stock or even thinking about further experiments related to the manufacturability of those chips.

    If there's anyone with hands-on manufacturing experience on the DM3730 (or similar) TI PoP reading this: Please reach out to the GTA04 guys and see if there's anything that can be done to help them.

    UPDATE (March 8, 2017):
     In an earlier post I was asserting that the GTA04 is open hardware (which I actually believed up to that point) until some readers pointed out to me that it isn't. It's sad it isn't, but it still has my sympathies.

    by Harald Welte at March 05, 2017 11:00 PM

    March 04, 2017

    Free Electrons

    Buildroot 2017.02 released, Free Electrons contributions

    The 2017.02 version of Buildroot has been released recently, and as usual Free Electrons has been a significant contributor to this release. A total of 1369 commits have gone into this release, contributed by 110 different developers.

    Before looking in more details at the contributions from Free Electrons, let’s have a look at the main improvements provided by this release:

    • The big announcement is that 2017.02 is going to be a long term support release, maintained with security and other important fixes for one year. This will allow companies, users and projects that cannot upgrade at each Buildroot release to have a stable Buildroot version to work with, coming with regular updates for security and bug fixes. A few fixes have already been collected in the 2017.02.x branch, and regular point releases will be published.
    • Several improvements have been made to support reproducible builds, i.e. the capability of having two builds of the same configuration provide the exact same bit-to-bit output. These are not enough to provide reproducible builds yet, but they are a piece of the puzzle, and more patches are pending for the next releases to move forward on this topic.
    • A package infrastructure for packages using the waf build system has been added. Seven packages in Buildroot are using this infrastructure currently.
    • Support for the OpenRISC architecture has been added, as well as improvements to the support of ARM64 (selection of ARM64 cores, possibility of building an ARM 32-bit system optimized for an ARM64 core).
    • The external toolchain infrastructure, which was all implemented in a single very complicated package, has been split into one package per supported toolchain and a common infrastructure. This makes it much easier to maintain.
    • A number of updates has been made to the toolchain components and capabilities: uClibc-ng bumped to 1.0.22 and enabled for ARM64, mips32r6 and mips64r6, gdb 7.12.1 added and switched to gdb 7.11 as the default, Linaro toolchains updated to 2016.11, ARC toolchain components updated to arc-2016.09, MIPS Codescape toolchains bumped to 2016.05-06, CodeSourcery AMD64 and NIOS2 toolchains bumped.
    • Eight new defconfigs for various hardware platforms have been added, including defconfigs for the NIOSII and OpenRISC Qemu emulation.
    • Sixty new packages have been added, and countless other packages have been updated or fixed.

    Buildroot developers at work during the Buildroot Developers meeting in February 2017, after the FOSDEM conference in Brussels.

    More specifically, the contributions from Free Electrons have been:

    • Thomas Petazzoni has handled the release of the first release candidate, 2017.02-rc1, and merged 742 patches out of the 1369 commits merged in this release.
    • Thomas contributed the initial work for the external toolchain infrastructure rework, which has been taken over by Romain Naour and finally merged thanks to Romain’s work.
    • Thomas contributed the rework of the ARM64 architecture description, to allow building an ARM 32-bit system optimized for a 64-bit core, and to allow selecting specific ARM64 cores.
    • Thomas contributed the raspberrypi-usbboot package, which packages a host tool that allows booting a RaspberryPi system over USB.
    • Thomas fixed a large number of build issues found by the project autobuilders, contributing 41 patches to this effect.
    • Mylène Josserand contributed a patch to the X.org server package, fixing an issue with the i.MX6 OpenGL acceleration.
    • Gustavo Zacarias contributed a few fixes on various packages.

    In addition, Free Electrons sponsored the participation of Thomas in the Buildroot Developers meeting that took place after the FOSDEM conference in Brussels, early February. A report of this meeting is available on the eLinux Wiki.

    The details of Free Electrons contributions:

    by Thomas Petazzoni at March 04, 2017 08:53 PM

    February 27, 2017

    Bunnie Studios

    Name that Ware, February 2017

    The ware for February 2017 is shown below:

    This is a ware contributed by an anonymous reader. Thanks for the contribution, you know who you are!

    by bunnie at February 27, 2017 08:04 AM

    Winner, Name that Ware January 2017

    The Ware for January 2017 is a Philips Norelco shaver, which recently died, so I thought I’d take it apart and see what’s inside. It’s pretty similar to the previous generation shaver I was using. Hard to pick a winner — Jimmyjo got the thread on the right track, Adrian got the reference to the prior blog post…from 8 years ago. I think I’ll run with Jimmyjo as the winner though, since it looks from the timestamps that he was the first to push the thread into the general category of electric shaver. Congrats, email me to claim your prize (again)!

    by bunnie at February 27, 2017 08:04 AM

    February 24, 2017

    Free Electrons

    Linux 4.10, Free Electrons contributions

    After 8 release candidates, Linus Torvalds released the final 4.10 Linux kernel last Sunday. A total of 13029 commits were made between 4.9 and 4.10. As usual, LWN had a very nice coverage of the major new features added during the 4.10 merge window: part 1, part 2 and part 3. The KernelNewbies Wiki has an updated page about 4.10 as well.

    Of the total of 13029 commits, 116 were made by Free Electrons engineers, which interestingly is exactly the same number of commits we made for the 4.9 kernel release!

    Our main contributions for this release have been:

    • For Atmel platforms, Alexandre Belloni added support for the securam block of the SAMA5D2, which is needed to implement backup mode, a deep suspend-to-RAM state for which we will be pushing patches over the next kernel releases. Alexandre also fixed some bugs in the Atmel dmaengine and USB gadget drivers.
    • For Allwinner platforms
      • Antoine Ténart enabled the 1-wire controller on the CHIP platform
      • Boris Brezillon fixed an issue in the NAND controller driver that prevented using ECC chunks of 512 bytes.
      • Maxime Ripard added support for the CHIP Pro platform from NextThing, together with the addition of many features for the underlying SoC, the GR8 from NextThing.
      • Maxime Ripard implemented audio capture support in the sun4i-i2s driver, bringing capture support to Allwinner A10 platforms.
      • Maxime Ripard added clock support for the Allwinner A64 to the sunxi-ng clock subsystem, and implemented numerous improvements for this subsystem.
      • Maxime Ripard reworked the pin-muxing driver on Allwinner platforms to use a new generic Device Tree binding, and deprecated the old platform-specific Device Tree binding.
      • Quentin Schulz added a MFD driver for the Allwinner A10/A13/A31 hardware block that provides ADC, touchscreen and thermal sensor functionality.
    • For the RaspberryPi platform
      • Boris Brezillon added support for the Video Encoder IP, which provides composite output. See also our recent blog post about our RaspberryPi work.
      • Boris Brezillon made a number of improvements to clock support on the RaspberryPi, which were needed for the Video Encoder IP support.
    • For the Marvell ARM platform
      • Grégory Clement enabled networking support on the Marvell Armada 3700 SoC, a Cortex-A53 based processor.
      • Grégory Clement did a large number of cleanups in the Device Tree files of Marvell platforms, fixing DTC warnings, and using node labels where possible.
      • Romain Perier contributed a brand new driver for the SPI controller of the Marvell Armada 3700, and therefore enabled SPI support on this platform.
      • Romain Perier extended the existing i2c-pxa driver to support the Marvell Armada 3700 I2C controller, and enabled I2C support on this platform.
      • Romain Perier extended the existing hardware random number generator driver for OMAP to also be usable with the SafeXcel EIP76 from Inside Secure. This allows using this driver on the Marvell Armada 7K/8K SoC.
      • Romain Perier contributed support for the Globalscale EspressoBin board, a low-cost development board based on the Marvell Armada 3700.
      • Romain Perier did a number of fixes to the CESA driver, used for the cryptographic engine found on 32-bit Marvell SoCs, such as Armada 370, XP or 38x.
      • Thomas Petazzoni fixed a bug in the mvpp2 network driver, currently only used on Marvell Armada 375, but in the process of being extended to be used on Marvell Armada 7K/8K as well.
    • As the maintainer of the MTD NAND subsystem, Boris Brezillon did a few cleanups in the Tango NAND controller driver, added support for the TC58NVG2S0H NAND chip, and improved the core NAND support to accommodate controllers that have some special timing requirements.
    • As the maintainer of the RTC subsystem, Alexandre Belloni did a number of small cleanups and improvements, especially to the jz4740 RTC driver.

    Here is the detailed list of our commits to the 4.10 release:

    by Thomas Petazzoni at February 24, 2017 09:12 AM

    February 23, 2017

    Harald Welte

    Manual testing of Linux Kernel GTP module

    In May 2016 we got the GTP-U tunnel encapsulation/decapsulation module developed by Pablo Neira, Andreas Schultz and myself merged into the 4.8.0 mainline kernel.

    During the second half of 2016, the code basically stayed untouched. In early 2017, several patch series of (at least) three authors have been published on the netdev mailing list for review and merge.

    This poses the very valid question of how we test those (sometimes quite intrusive) changes. Setting up a complete cellular network with either GPRS/EGPRS or even UMTS/HSPA is possible using OsmoSGSN and related Osmocom components. But it's of course a luxury that not many Linux kernel networking hackers have, as it involves the availability of a supported GSM BTS or UMTS hNodeB. And even if that is available, there's still the issue of having a spectrum license, or a wired setup with coaxial cable.

    So as part of the recent discussions on netdev, I tested and described a minimal test setup using libgtpnl, OpenGGSN and sgsnemu.

    This setup will start a mobile station + SGSN emulator inside a Linux network namespace, which talks GTP-C to OpenGGSN on the host, as well as GTP-U to the Linux kernel GTP-U implementation.

    In case you're interested, feel free to check the following wiki page: https://osmocom.org/projects/linux-kernel-gtp-u/wiki/Basic_Testing

    This is of course just for manual testing, and for functional (not performance) testing only. It would be great if somebody would pick up on my recent mail containing some suggestions about an automatic regression testing setup for the kernel GTP-U code. I have way too many spare-time projects in desperate need of some attention to work on this myself. And unfortunately, none of the telecom operators (who are the ones benefiting most from a Free Software accelerated GTP-U implementation) seems to be interested in at least co-funding or otherwise contributing to this effort :/

    by Harald Welte at February 23, 2017 11:00 PM

    February 20, 2017

    Free Electrons

    Free Electrons and Raspberry Pi Linux kernel upstreaming

    For a few months, Free Electrons has been helping the Raspberry Pi Foundation upstream to the Linux kernel a number of display related features for the Raspberry Pi platform.

    The main goal behind this upstreaming process is to get rid of the closed-source firmware that is used on non-upstream kernels every time you need to enable/access a specific hardware feature, and replace it by something that is both open-source and compliant with upstream Linux standards.

    Eric Anholt has been working hard to upstream display related features. His biggest contribution has certainly been the open-source driver for the VC4 GPU, but he also worked on the display controller side, and we were contracted to help him with this task.

    Our first objective was to add support for SDTV (composite) output, which appeared to be much easier than we imagined. As some of you might already know, the display controller of the Raspberry Pi already has a driver in the DRM subsystem. Our job was to add support for the SDTV encoder (also called VEC, for Video EnCoder). The driver has been submitted just before the 4.10 merge window and surprisingly made it into 4.10 (see also the patches). Eric Anholt explained on his blog:

    The Raspberry Pi Foundation recently started contracting with Free Electrons to give me some support on the display side of the stack. Last week I got to review and release their first big piece of work: Boris Brezillon’s code for SDTV support. I had suggested that we use this as the first project because it should have been small and self contained. It ended up that we had some clock bugs Boris had to fix, and a bug in my core VC4 CRTC code, but he got a working patch series together shockingly quickly. He did one respin for a couple more fixes once I had tested it, and it’s now out on the list waiting for devicetree maintainer review. If nothing goes wrong, we should have composite out support in 4.11 (we’re probably a week late for 4.10).

    Our second objective was to help Eric with HDMI audio support. The code was submitted on the mailing list two weeks ago and will hopefully be queued for 4.12. This time, we didn't write much code, since Eric had already done the bulk of the work. What we did, though, was debug the implementation to make it work. Eric also explained on his blog:

    Probably the biggest news of the last two weeks is that Boris’s native HDMI audio driver is now on the mailing list for review. I’m hoping that we can get this merged for 4.12 (4.10 is about to be released, so we’re too late for 4.11). We’ve tested stereo audio so far, no compressed audio (though I think it should Just Work), and >2 channel audio should be relatively small amounts of work from here. The next step on HDMI audio is to write the alsalib configuration snippets necessary to hide the weird details of HDMI audio (stereo IEC958 frames required) so that sound playback works normally for all existing userspace, which Boris should have a bit of time to work on still.

    On our side, it has been a great experience to work on such topics with Eric, and you should expect more contributions from Free Electrons for the Raspberry Pi platform in the next months, so stay tuned!

    by Boris Brezillon at February 20, 2017 04:23 PM

    February 15, 2017

    Harald Welte

    Cellular re-broadcast over satellite

    I've recently attended a seminar that (among other topics) also covered RF interference hunting. The speaker was talking about various real-world cases of RF interference and illustrating them in detail.

    Of course everyone who has any interest in RF or cellular will know about the fundamental issues of radio frequency interference. For the biggest part, you have

    • cells of the same operator interfering with each other due to too frequent frequency re-use, adjacent channel interference, etc.
    • cells of different operators interfering with each other due to intermodulation products and the like
    • cells interfering with cable TV, terrestrial TV
    • DECT interfering with cells
    • cells or microwave links interfering with SAT-TV reception
    • all types of general EMC problems

    But what the speaker of this seminar covered was actually a cellular base-station being re-broadcast all over Europe via a commercial satellite (!).

    It is a well-known fact that most satellites in the sky are basically just "bent pipes", i.e. they consist of an RF receiver on one frequency, a mixer to shift the frequency, and a power amplifier. So basically whatever is sent up on one frequency to the satellite gets re-transmitted back down to earth on another frequency. This is abused in "satellite hijacking" or "transponder hijacking" and has been covered for decades in various publications.

    Ok, but how does cellular relate to this? Well, apparently some people are running VSAT terminals (bi-directional satellite terminals) with improperly shielded or broken cables/connectors. In that case, the RF emitted from a nearby cellular base station leaks into that cable, and will get amplified + up-converted by the block up-converter of that VSAT terminal.

    The bent-pipe satellite subsequently picks this signal up and re-transmits it all over its coverage area!

    I've tried to find some public documents about this, and there's surprisingly little public information about the phenomenon.

    However, I could find a slide set from SES, presented at a Satellite Interference Reduction Group: Identifying Rebroadcast (GSM)

    It describes a surprisingly manual and low-tech approach to hunting down the source of the interference by using an old Nokia net-monitor phone to display the MCC/MNC/LAC/CID of the cell. Even in 2011 there were already open source projects such as airprobe that could have done the job based on sampled IF data. And I'm not even starting to consider proprietary tools.

    It should be relatively simple to have an SDR that you can tune to a given satellite transponder, and which would then look for any GSM/UMTS/LTE carrier within its spectrum and dump the carriers' identities in a fully automatic way.

    But then, maybe it really doesn't happen all that often, after all, to justify such a development...

    by Harald Welte at February 15, 2017 11:00 PM

    February 13, 2017

    Free Electrons

    Power measurement with BayLibre’s ACME cape

    When working on optimizing the power consumption of a board we need a way to measure its consumption. We recently bought an ACME from BayLibre to do that.

    Overview of the ACME

    The ACME is an extension board for the BeagleBone Black, providing multi-channel power and temperature measurement capabilities. The cape itself has eight probe connectors allowing multi-channel measurements. Probes for USB, Jack or HE10 can be bought separately depending on the boards you want to monitor.


    Last but not least, the ACME is fully open source, from the hardware to the software.

    First setup

    Ready-to-use pre-built images are available and can be flashed onto an SD card. There are two different images: one acting as a standalone device and one providing an IIO capture daemon. While the latter can be used in automated farms, we chose the standalone image, which provides user-space tools to control the probes and is more suited to power consumption development topics.

    The standalone image userspace can also be built manually using Buildroot, a provided custom configuration and custom init scripts. The kernel should be built using a custom configuration and the device tree needs to be patched.

    Using the ACME

    To control the probes and get measured values, the Sigrok software is used. There is currently no support for sending data over the network. Because of this limitation we need to access the BeagleBone Black shell through SSH and run our commands there.

    We can display information about the detected probe, by running:

    # sigrok-cli --show --driver=baylibre-acme
    Driver functions:
        Continuous sampling
        Sample limit
        Time limit
        Sample rate
    baylibre-acme - BayLibre ACME with 3 channels: P1_ENRG_PWR P1_ENRG_CURR P1_ENRG_VOL
    Channel groups:
        Probe_1: channels P1_ENRG_PWR P1_ENRG_CURR P1_ENRG_VOL
    Supported configuration options across all channel groups:
        continuous: 
        limit_samples: 0 (current)
        limit_time: 0 (current)
        samplerate (1 Hz - 500 Hz in steps of 1 Hz)
    

    The driver has four parameters (continuous sampling, sample limit, time limit and sample rate) and has one probe attached with three channels (PWR, CURR and VOL). The acquisition parameters help configure data acquisition by setting sampling limits or rates. The rates are given in Hertz, and should be within the 1 to 500 Hz range when using an ACME.

    For example, to sample at 20Hz and display the power consumption measured by our probe P1:

    # sigrok-cli --driver=baylibre-acme --channels=P1_ENRG_PWR \
          --continuous --config samplerate=20
    FRAME-BEGIN
    P1_ENRG_PWR: 1.000000 W
    FRAME-END
    FRAME-BEGIN
    P1_ENRG_PWR: 1.210000 W
    FRAME-END
    FRAME-BEGIN
    P1_ENRG_PWR: 1.210000 W
    FRAME-END
    

    Of course there are many more options as shown in the Sigrok CLI manual.

    Beta image

    A new image is being developed and will change the way the ACME is used. As it’s already available in beta, we tested it (and didn’t go back to the stable image). This new version aims to use only IIO to provide the probes’ data, instead of having a custom Sigrok driver. The main advantage is that many software tools are IIO aware, or will be, as it’s the standard way to use this kind of sensor with the Linux kernel. Last but not least, IIO provides ways to communicate over the network.

    A new webpage with information on how to use the beta image is available at https://baylibre-acme.github.io. This image isn’t compatible with the current stable one, which we previously described.

    The first nice thing to notice when using the beta image is the Bonjour support, which lets us communicate with the board effortlessly:

    $ ping baylibre-acme.local
    

    A new tool, acme-cli, is provided to switch the probes on or off as needed. To switch the first probe on or off:

    $ ./acme-cli switch_on 1
    $ ./acme-cli switch_off 1
    

    We do not need any additional custom software to use the board, as the sensor data is available through the IIO interface. This means we should be able to use any IIO aware tool to gather the power consumption values (a minimal libiio sketch follows the list below):

    • Sigrok, on the laptop/machine this time as IIO is able to communicate over the network;
    • libiio/examples, which provides the iio-monitor tool;
    • iio-capture, which is a fork of iio-readdev designed by BayLibre for an integration into LAVA (automated tests);
    • and many more…
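
    As a rough illustration of the last point, here is a minimal libiio sketch (not from the original article) that connects to the cape over the network and lists the probes’ IIO devices and channels; reading an actual value is then just an iio_channel_attr_read() on the attribute exposed by the driver, whose name depends on the probe and kernel version:

    #include <stdio.h>
    #include <iio.h>

    /* Sketch only: enumerate the ACME's IIO devices/channels over the network. */
    int main(void)
    {
        struct iio_context *ctx = iio_create_network_context("baylibre-acme.local");
        if (!ctx)
            return 1;

        for (unsigned int i = 0; i < iio_context_get_devices_count(ctx); i++) {
            const struct iio_device *dev = iio_context_get_device(ctx, i);
            const char *name = iio_device_get_name(dev);

            printf("device: %s\n", name ? name : iio_device_get_id(dev));
            for (unsigned int j = 0; j < iio_device_get_channels_count(dev); j++)
                printf("  channel: %s\n",
                       iio_channel_get_id(iio_device_get_channel(dev, j)));
        }

        iio_context_destroy(ctx);
        return 0;
    }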

    Conclusion

    We haven’t used all the possibilities offered by the ACME cape yet, but so far it has helped us a lot when working on power consumption related topics. The ACME cape is simple to use and comes with a working pre-built image. The beta image adds IIO support, which improves the usability of the device, and even though it’s a beta version we would recommend using it.

    by Antoine Ténart at February 13, 2017 03:38 PM

    February 12, 2017

    Harald Welte

    Towards a real SIGTRAN/SS7 stack in libosmo-sigtran

    Ever since the good old days of the late 1980s - and to a surprising extent even today - telecom signaling traffic has been carried over circuit-switched SS7, with its TDM lines as the physical layer, rather than an IP/Ethernet based transport.

    When Holger first created OsmoBSC, the BSC-only version of OpenBSC, some 7-8 years ago, he needed to implement a minimal subset of SCCP wrapped in TCP called SCCP Lite. This was due to the simple fact that the MSC it had to interoperate with implemented this non-standard protocol stacking, developed and deployed before the IETF SIGTRAN WG specifications M3UA and SUA came around. But even after those were specified in 2004, 3GPP didn't specify how to carry A over IP in a standard way until the end of 2008, when a first A interface over IP study was released.

    As time passes, more modern MSCs of course still implement classic circuit-switched SS7, but appear to have dropped SCCPlite in favor of the real AoIP that 3GPP has meanwhile specified. So it's time to add this to the Osmocom universe and OsmoBSC.

    A couple of years ago (2010-2013) I implemented both classic SS7 (MTP2/MTP3/SCCP) as well as SIGTRAN stackings (M2PA/M2UA/M3UA/SUA) in Erlang. The result has been used in some production deployments, but only with a relatively limited feature set. Unfortunately, this code has not received any contributions in the time since, and I have to say that as an open source community project, it has failed. Also, while Erlang might be fine for core network equipment, running it on a BSC really is overkill. Keep in mind that we often run OpenBSC on really small ARM926EJ-S based embedded systems, much more resource constrained than any smartphone of the last decade.

    In the meantime (2015/2016) we also implemented some minimal SUA support for interfacing with UMTS femto/small cells via Iuh (see OsmoHNBGW).

    So in order to proceed to implement the required SCCP-over-M3UA-over-SCTP stacking, I originally thought: well, take Holger's old SCCP code, remove it from the IPA multiplex below, and stack it on top of a new M3UA codebase that is partially copied from SUA.

    However, this falls short of the goals in several ways:

    • The application shouldn't care whether it runs on top of SUA or SCCP, it should use a unified interface towards the SCCP Provider. OsmoHNBGW and the SUA code already introduce such an interface based on the SCCP-User-SAP implemented using Osmocom primitives (osmo_prim). However, the old OsmoBSC/SCCPlite code doesn't have such an abstraction.
    • The code should be modular and reusable for other SIGTRAN stackings as required in the future

    So I found myself sketching out what needs to be done and I ended up pretty much with a re-implementation of large parts. Not quite fun, but definitely worth it.

    The strategy is:

    And then finally stack all those bits on top of each other, rendering a fairly clean and modern implementation that can be used with the IuCS of the virtually unmodified OsmoHNBGW, OsmoCSCN and OsmoSGSN for testing.

    Next steps in the direction of the AoIP are:

    • Implementation of the MTP-SAP based on the IPA transport
    • Binding the new SCCP code on top of that
    • Converting OsmoBSC code base to use the SCCP-User-SAP for its signaling connection

    From that point onwards, OsmoBSC doesn't care anymore whether it transports the BSSAP/BSSMAP messages of the A interface over SCCP/IPA/TCP/IP (SCCPlite), SCCP/M3UA/SCTP/IP (3GPP AoIP), or even something like SUA/SCTP/IP.

    However, the 3GPP AoIP specs (unlike SCCPlite) actually modify the BSSAP/BSSMAP payload. Rather than using Circuit Identifier Codes and then mapping the CICs to UDP ports based on some secret conventions, they actually encapsulate the IP address and UDP port information for the RTP streams. This is of course the cleaner and more flexible approach, but it means we'll have to do some further changes inside the actual BSC code to accommodate this.

    by Harald Welte at February 12, 2017 11:00 PM

    February 11, 2017

    Harald Welte

    Testing (not only) telecom protocols

    When implementing any kind of communication protocol, one always dreams of some existing test suite that one can simply run against the implementation to check if it performs correctly in at least those use cases that matter to the given application.

    Of course in the real world, there rarely are protocols where this is true. If test specifications exist at all, they are often just very abstract texts for human consumption that you as the reader should implement yourself.

    For some (by far not all) of the protocols found in cellular networks, every so often I have seen some formal/abstract machine-parseable test specifications. Sometimes it was TTCN-2, and sometimes TTCN-3.

    If you haven't heard about TTCN-3, it is basically a way to create functional tests in an abstract description (textual + graphical), and then compile that into an actual executable test suite that you can run against the implementation under test.

    However, when I last did some research into this several years ago, I couldn't find any Free / Open Source tools to actually use those formally specified test suites. This is not a big surprise, as even much more fundamental tools for many telecom protocols are missing, such as good/complete ASN.1 compilers, or even CSN.1 compilers.

    To my big surprise I now discovered that Ericsson had released their (formerly internal) TITAN TTCN3 Toolset as Free / Open Source Software under EPL 1.0. The project is even part of the Eclipse Foundation. Now I'm certainly not a friend of Java or Eclipse by any means, but well, for running tests I'd certainly not complain.

    The project also doesn't seem like it was a one-time code-drop but appears very active, with many repositories on github. For example the core module, titan.core, shows plenty of activity on an almost daily basis. Also, binary releases for a variety of distributions are made available. They even have a video showing the installation ;)

    If you're curious about TTCN-3 and TITAN, Ericsson also have made available a great 200+ pages slide set about TTCN-3 and TITAN.

    I haven't yet had time to play with it, but it definitely is rather high on my TODO list to try.

    ETSI provides a couple of test suites in TTCN-3 for protocols like DIAMETER, GTP2-C, DMR, IPv6, S1AP, LTE-NAS, 6LoWPAN, SIP, and others at http://forge.etsi.org/websvn/ (it's also the first time I've seen that ETSI has an SVN server; everyone else is using git these days, but yes, revision control systems rather than periodic ZIP files are definitely big progress). They should do that for their reference codecs and ASN.1 files, too.

    I'm not sure when I'll get around to it. Sadly, there is no TTCN-3 for SCCP, SUA, M3UA or any SIGTRAN related stuff, otherwise I would want to try it right away. But it definitely seems like a very interesting technology (and tool).

    by Harald Welte at February 11, 2017 11:00 PM

    February 10, 2017

    Harald Welte

    FOSDEM 2017

    Last weekend I had the pleasure of attending FOSDEM 2017. It has been, for many years, probably the most exciting event dedicated exclusively to Free Software to attend every year.

    My personal highlights (next to meeting plenty of old and new friends) in terms of the talks were:

    I attended, but was not so excited by, Georg Greve's OpenPOWER talk. It was a great talk, and it is an important topic, but the engineer in me would have hoped for some actual beefy technical stuff. But well, I was just not the right audience. I had heard about OpenPOWER quite some time ago and have been following it from a distance.

    The LoRaWAN talk couldn't have been any less technical, despite promising technical, political and cultural aspects in its title. But then, well, just recently 33C3 had the most exciting LoRa PHY Reverse Engineering Talk by Matt Knight.

    Other talks whose recordings I still want to watch one of these days:

    by Harald Welte at February 10, 2017 11:00 PM

    February 05, 2017

    Andrew Zonenberg, Silicon Exposed

    STARSHIPRAIDER: Input buffer rev 0.1 design and characterization

    Working as an embedded systems pentester is a lot of fun, but it comes with some annoying problems. There's so many tools that I can never seem to find the right one. Need to talk to a 3.3V UART? I almost invariably have an FTDI cable configured for 5 or 1.8V on my desk instead. Need to dump a 1.8V flash chip? Most of our flash dumpers won't run below 3.3V. Need to sniff a high-speed bus? Most of the Saleae Logic analyzers floating around the lab are too slow to keep up with fast signals, and the nice oscilloscopes don't have a lot of channels. And everyone's favorite jack-of-all-trades tool, the Bus Pirate, is infamous for being slow.

    As someone with no shortage of virtual razors, I decided that this yak needed to be shaved! The result was an ongoing project I call STARSHIPRAIDER. There will be more posts on the project in the coming months so stay tuned!

    The first step was to decide on a series of requirements for the project:
    • 32 bidirectional I/O ports split into four 8-pin banks.
      This is enough to sniff any commonly encountered embedded bus other than DRAM. Multiple banks are needed to support multiple voltage levels in the same target.
    • Full support for 1.2 to 5V logic levels.
      This is supposed to be a "Swiss Army knife" embedded systems debug/testing tool. This voltage range encompasses pretty much any signalling voltage commonly encountered in embedded devices.
    • Tolerance to +/- 12V DC levels.
      Test equipment needs to handle some level of abuse. When you're reverse engineering a board it's easy to hook up ground to the wrong signal, probe a power rail, or even do both at once. The device doesn't have to function in this state (shutting down for protection is OK) but needs to not suffer permanent damage. It's also OK if the protection doesn't handle AC sources - the odds of accidentally connecting a piece of digital test equipment to a big RF power amplifier are low enough that I'm not worried.
    • 500 Mbps input/output rate for each pin.
      This was a somewhat arbitrary choice, but preliminary math indicated it was feasible. I wanted something significantly faster than existing tools in the class.
    • Ethernet-based interface to host PC.
      I've become a huge fan of Ethernet and IPv6 as a communications interface for my projects. It doesn't require any royalties or license fees, scales from 10 Mbps to >10 Gbps and supports bridging between different link speeds, supports multi-master topologies, and can be bridged over a WAN or VPN. USB and PCIe, the two main alternatives, can do few if any of these.
    • Large data buffer.
      Most USB logic analyzers have very high peak capture rates, but the back-haul interface to the host PC can't keep up with extended captures at high speed. Commodity DRAM is so cheap that there's no reason to not stick a whole SODIMM of DDR3 in the instrument to provide an extremely deep capture buffer.
    • Multiple virtual instruments connected to a crossbar.
      Any nontrivial embedded device contains multiple buses of interest to a reverse engineer. STARSHIPRAIDER needs to be able to connect to several at once (on arbitrary pins), bridge them out to separate TCP ports, and allow multiple testers to send test vectors to them independently.
    The brain of the system will be fairly straightforward high-speed digital. It will be a 6-8 layer PCB with an Artix-7 FPGA in FGG484 package, a SODIMM socket for 4GB of DDR3 800, a KSZ9031 Gigabit Ethernet PHY, a TLK10232 10gbit Ethernet PHY, and a SFP+ cage, plus some sort of connector (most likely a Samtec Q-strip) for talking to the I/O subsystem on a separate board.

    The challenging part of the design, from an architectural perspective, seemed to be the I/O buffer and input protection circuit, so I decided to prototype it first.

    STARSHIPRAIDER v0.1 I/O buffer design

    A block diagram of the initial buffer design is shown above. The output buffer will be discussed in a separate post once I've had a chance to test it; today we'll be focusing on the input stage (the top half of the diagram).

    During normal operation, the protection relay is closed. The series resistor has insignificant resistance compared to the input impedance of the comparator (an ADCMP607), so it can be largely ignored. The comparator checks the input signal against a threshold (chosen appropriately for the I/O standard in use) and sends a differential signal to the host board for processing. But what if something goes wrong?

    If the user accidentally connects the probe to a signal outside the acceptable voltage range, a Schottky diode connected to the +5V or ground rail will conduct and shunt the excess voltage safely into the power rails. The series resistor limits fault current to a safe level (below the diode's peak power rating). After a short time (about 150 µs with my current relay driver), the protection relay opens and breaks the circuit.

    The relay is controlled by a Silego GreenPAK4 mixed-signal FPGA, running a small design written in Verilog and compiled with my open-source toolchain. The code for the GreenPAK design is on Github.

    All well and good in theory... but does it work? I built a characterization board containing a single I/O buffer and loaded with test points and probe connectors. You can grab the KiCAD files for this on Github as well. Here's a quick pic after assembly:

    STARSHIPRAIDER I/O characterization board
    Initial test results were not encouraging. Positive overvoltage spikes were clamped to +8V and negative spikes were clamped to -1V - well outside the -0.5 to +6V absolute max range of my comparator.
    Positive transient response

    Negative transient response


    After a bit of review of the schematics, I found two errors. The "5V" ESD diode I was using to protect the high side had a poorly controlled Zener voltage and could clamp as high as 8V or 9V. The Schottky on the low side was able to survive my fault current but the forward voltage increased massively beyond the nominal value.

    I reworked the board to replace the series resistor with a larger one (39 ohms) to reduce the maximum fault current, replaced the low-side Schottky with one that could handle more current, and replaced the Zener with an identical Schottky clamping to the +5V rail.
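
    To put rough numbers on that (my own back-of-the-envelope figures, not from the post): assuming a +12 V fault and the high-side Schottky clamping the node to the +5 V rail plus roughly a 0.4 V forward drop, the 39 ohm resistor sees about 6.6 V, so

    I_{\mathrm{fault}} \approx \frac{12\,\mathrm{V} - 5.4\,\mathrm{V}}{39\,\Omega} \approx 170\,\mathrm{mA}, \qquad P_R \approx I_{\mathrm{fault}}^2 \cdot 39\,\Omega \approx 1.1\,\mathrm{W}

    and that dissipation only lasts for the ~150 µs it takes the relay driver to open the circuit.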

    Testing this version gave much better results. There was still a small amount of ringing (less than five nanoseconds) a few hundred mV past the limit, but the comparator's ESD diodes should be able to safely dissipate this brief pulse.

    Positive transient response, after rework
    Negative transient response, after rework
    Now it was time to test the actual signal path. My first iteration of the test involved cobbling together a signal path from an FPGA board through the test platform and to the oscilloscope without any termination. The source of the signal was a BNC-to-minigrabber flying lead test clip! Needless to say, results were less than stellar.

    PRBS31 eye at 80 Mbps through protection circuit with flying leads and no terminator
    After ordering some proper RF test supplies (like an inline 50 ohm BNC terminator), I got much better signal quality. The eye was very sharp and clear at 100 Mbps. It was visibly rounded at 200 Mbps, but rendering a squarewave at that rate requires bandwidth much higher than the 100 MHz of my oscilloscope so results were inconclusive.

    PRBS31 eye at 100 Mbps through protection circuit with proper cabling
    PRBS31 eye at 200 Mbps, limited by oscilloscope bandwidth
    I then hooked the protection circuit up to the comparator to test the entire inbound signal chain. While the eye looked pretty good at 100 Mbps (plotting one leg of the differential since my scope was out of channels), at 200 Mbps horrible jitter appeared.

    PRBS31 eye at 100 Mbps through full input buffer
    PRBS31 eye at 200 Mbps through full input buffer
    After quite a bit of scratching my head and fumbling with datasheets, I realized my oscilloscope was the problem by plotting the clock reference I was triggering on. The jitter was visible in this clock as well, suggesting that it was inherent in the oscilloscope's trigger circuit. This isn't too surprising considering I'm really pushing the limits of this scope - I need a better one to do this kind of testing properly.

    PRBS31 eye at 200 Mbps plus 200 MHz sync clock
     At this point I've done about all of the input stage testing I can do with this oscilloscope. I'm going to try and rig up a BER tester on the FPGA so I can do PRBS loopback through the protection stage and comparator at higher speeds, then repeat for the output buffer and the protection run in the opposite direction.

    I still have more work to do on the protection circuit as well... while it's fine at 100 Mbps, the 2x 10pF Schottky diode parasitic capacitance is seriously degrading my rise times (I calculated an RC filter -3dB point of around 200 MHz, so higher harmonics are being chopped off). I have some ideas on how I can cut this down substantially, but that will require a board respin and another blog post!
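
    As a sanity check of that figure (my arithmetic, using the 39 ohm series resistor and the 2 x 10 pF diode capacitance mentioned above):

    f_{-3\,\mathrm{dB}} = \frac{1}{2 \pi R C} = \frac{1}{2 \pi \cdot 39\,\Omega \cdot 20\,\mathrm{pF}} \approx 204\,\mathrm{MHz}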

    by Andrew Zonenberg (noreply@blogger.com) at February 05, 2017 10:20 AM

    February 03, 2017

    Free Electrons

    Video and slides from Linux Conf Australia

    Linux Conf Australia took place two weeks ago in Hobart, Tasmania. For the second time, a Free Electrons engineer gave a talk at this conference: for this edition, Free Electrons CTO Thomas Petazzoni gave a talk titled A tour of the ARM architecture and its Linux support. It was intended as an introductory talk, explaining what ARM is, the concepts behind the ARM architecture and ARM System-on-Chips, the bootloaders typically used on ARM, and the Linux support for ARM with the concept of the Device Tree.

    The slides of the talk are available in PDF format, and the video is available on Youtube. We got some nice feedback afterwards, which is a good indication that a number of attendees found it informative.

    All the videos from the different talks are also available on Youtube.

    We once again found LCA to be a really great event, and want to thank the LCA organization for accepting our talk proposal and funding the travel expenses. Next year's LCA, in 2018, will take place in Sydney, in mainland Australia.

    by Thomas Petazzoni at February 03, 2017 01:03 PM

    February 02, 2017

    Free Electrons

    Free Electrons at FOSDEM and the Buildroot Developers Meeting

    Like every year, a number of Free Electrons engineers will be attending the FOSDEM conference next weekend, on February 4 and 5, in Brussels. This year, Mylène Josserand and Thomas Petazzoni are going to FOSDEM. Being the biggest European open-source conference, FOSDEM is a great opportunity to meet a large number of open-source developers and learn about new projects.

    In addition, Free Electrons is sponsoring the participation of Thomas Petazzoni to the Buildroot Developers meeting, which takes place during two days right after the FOSDEM conference. During this event, the Buildroot developers community gathers to make progress on the project by having discussions on the current topics, and working on the patches that have been submitted and need to be reviewed and merged.

    by Thomas Petazzoni at February 02, 2017 01:28 PM

    Free Electrons at the Embedded Linux Conference 2017

    The next Embedded Linux Conference will take place later this month in Portland (US), from February 21 to 23, with a great schedule of talks. As usual, a number of Free Electrons engineers will attend this event, and we will also be giving a few talks.

    Embedded Linux Conference 2017

    Free Electrons CEO Michael Opdenacker will deliver a talk on Embedded Linux size reduction techniques, while Free Electrons engineer Quentin Schulz will give a talk on Power Management Integrated Circuits: Keep the Power in Your Hands. In addition, Free Electrons engineers Maxime Ripard, Antoine Ténart and Mylène Josserand will be attending the conference.

    We once again look forward to meeting our fellow members of the embedded Linux and Linux kernel communities!

    by Thomas Petazzoni at February 02, 2017 08:46 AM

    January 31, 2017

    Harald Welte

    Osmocom Conference 2017 on April 21st

    I'm very happy that in 2017, we will have the first ever technical conference on the Osmocom cellular infrastructure projects.

    For many years, we have had a small, invitation only event by Osmocom developers for Osmocom developers called OsmoDevCon. This was fine for the early years of Osmocom, but during the last few years it became apparent that we also need a public event for our many users. Those range from commercial cellular operators to community based efforts like Rhizomatica, and of course include the many research/lab type users with whom we started.

    So now we'll have the public OsmoCon on April 21st, back-to-back with the invitation-only OsmoDevcon from April 22nd through 23rd.

    I'm hoping we can bring together a representative sample of our user base at OsmoCon 2017 in April. Looking forward to meeting you all. I hope you're also curious to hear more from other users, and of course from the development team.

    Regards,
    Harald

    by Harald Welte at January 31, 2017 11:00 PM

    January 30, 2017

    Bunnie Studios

    Name that Ware January 2017

    The Ware for January 2017 is shown below:

    This close-up view shows about a third of the circuit board. If it turns out to be too difficult to guess from the clues shown here, I’ll update this post with a full-board photo; but I have a feeling long-time players of Name that Ware might have too easy a time with this one.

    by bunnie at January 30, 2017 12:30 PM

    Winner, Name that Ware December 2016

    The ware for December 2016 is a diaper making machine. The same machine can be configured for making sanitary napkins or diapers by swapping out the die cut rollers and base material; in fact, the line next to the one pictured was producing sanitary napkins at the time this photo was taken. Congrats to Stuart for the first correct guess, email me for your prize!

    by bunnie at January 30, 2017 12:30 PM

    January 26, 2017

    ZeptoBARS

    Analog Devices AD584 - precision voltage reference : weekend die-shot

    Analog Devices AD584 is a voltage reference with 4 outputs: 2.5, 5, 7.5 and 10V. Tempco is laser trimmed to 15ppm/°C and voltage error to ~0.1%.


    Die size 2236x1570 µm.

    One can refer to a die photo from the AD datasheet, showing a slightly older design of the same chip:

    January 26, 2017 09:01 PM

    January 22, 2017

    Harald Welte

    Autodesk: How to lose loyal EAGLE customers

    A few days ago, Autodesk announced that the popular EAGLE electronics design automation (EDA) software is moving to a subscription-based model.

    Previously you paid once for a license and could use that version/license as long as you wanted; now there is a monthly subscription fee. Once you stop paying, you lose the right to use the software. Welcome to the brave new world.

    I have remotely observed this subscription model as a general trend in the proprietary software universe. So far it hasn't affected me at all, as the only two proprietary applications I use on a regular basis during the last decade are IDA and EAGLE.

    I already have ethical issues with using non-free software, but those two cases have been the exceptions, in order to get to the productivity required by the job. While I can somehow convince my conscience in those two cases that it's OK - using software under a subscription model is completely out of the question, period. Not only would I end up paying for the rest of my professional career in order to be able to open and maintain old design files, but I would also have to accept software that "calls home" and has "remote kill" features. This is clearly not something I would ever want to use on any of my computers. Also, I don't want software to be associated with any account, and it's not the bloody business of the software maker to know when and where I use my software.

    For me - and I hope for many, many other EAGLE users - this move is utterly unacceptable and certainly marks the end of any business between the EAGLE makers and myself and/or my companies. I will happily use my current "old-style" EAGLE 7.x licenses for the near future, and then see what kind of improvements I would need to contribute to KiCAD or other FOSS EDA software in order to eventually migrate to those.

    As expected, this doesn't only upset me, but many other customers, some of whom have been loyal to using EAGLE for many years if not decades, back to the DOS version. This is reflected by some media reports (like this one at hackaday) and user posts at element14.com or eaglecentral.ca which are similarly critical of this move.

    Rest in Peace, EAGLE. I hope Autodesk gets what they deserve: A new influx of migrations away from EAGLE into the direction of Open Source EDA software like KiCAD.

    In fact, the more I think about it, the more I'm inclined to work on good FOSS migration tools / converters - not only for my own use, but to help more people move away from EAGLE. It's not that I don't have enough projects on my hands already, but at least I'm motivated to do something about this betrayal by Autodesk. Let's see what (if anything) will come out of this.

    So let's see it that way: what Autodesk is doing is raising the level of pain of using EAGLE so high that more people will use and contribute to FOSS EDA software. And that is actually a good thing!

    by Harald Welte at January 22, 2017 11:00 PM

    January 20, 2017

    Elphel

    Lapped MDCT-based image conditioning with optical aberrations correction, color conversion, edge emphasis and noise reduction

    Fig.1. Image comparison of the different processing stages output

    Results of the processing of the color image

    The previous blog post, “Lens aberration correction with the lapped MDCT”, described our experiments with the lapped MDCT[1] for optical aberration correction of a single color channel and the separation of the asymmetrical kernel into a small asymmetrical part for direct convolution and a larger symmetrical one to be applied in the frequency domain of the MDCT. We supplemented this processing chain with additional image conditioning steps to evaluate the overall quality of the results and the feasibility of the MDCT approach for processing in the camera FPGA.

    The image comparator in Fig.1 allows seeing the difference between the images generated from the results of the several processing stages. It makes it possible to compare any two of the image layers by either sliding the image separator or by just clicking on the image – that alternates the right/left images. Zoom is controlled by the scroll wheel (clicking on the zoom indicator fits the image), pan – by dragging.

    The original image was acquired with an Elphel model 393 camera with a 5 Mpix MT9P006 image sensor and a Sunex DSL227 fisheye lens, saved in jp4 format as raw Bayer data at 98% compression quality. Calibration was performed with the Java program using the calibration pattern visible in the image itself. The program is designed to work with low-distortion lenses, so the fisheye was a stretch, and the calibration kernels near the edges are just replicated from the ones closer to the center, so aberration correction is only partial in those areas.

    The first two layers differ just by the added annotations; they both show the output of simple bilinear demosaic processing, the same as generated by the camera when running in JPEG mode. The next layers show different stages of the processing; details are provided later in this blog post.

    Linear part of the image conditioning: convolution and color conversion

    Correction of the optical aberrations in the image can be viewed as convolution of the raw image array with space-variant kernels derived from the optical point spread functions (PSF). In the general case of true space-variant kernels (different for each pixel) it is not possible to use DFT-based convolution, but when the kernels change slowly and the image tiles can be considered isoplanatic (areas where the PSF remains the same to the specified precision) it is possible to apply the same kernel to the whole image tile that is processed with the DFT (or the combined convolution/MDCT in our case). Such an approach has been studied in depth for astronomy [2],[3] (where they almost always have plenty of δ-function light sources to measure the PSF in the field of view :-)).

    The procedure described below is a combination of the sparse kernel convolution in the space domain with the lapped MDCT processing making use of its perfect (only approximate with the variant kernels) reconstruction property, but it still implements the same convolution with the variant kernels.

    Signal flow is presented in Fig.2. Input signal is the raw image data from the sensor sampled through the color filter array organized as a standard Bayer mosaic: each 2×2 pixel tile includes one of the red and blue samples, and 2 of the green ones.

    In addition to the image data the process depends on the calibration data – pairs of asymmetrical and symmetrical kernels calculated during camera calibration as described in the previous blog post.

    Fig.2. Signal flow of the linear part of MDCT-based image conditioning

    Image data is processed in the following sequence of linear operations, resulting in intensity (Y) and two color difference components:

    1. Input composite signal is split by colors into 3 separate channels producing sparse data in each.
    2. Each channel data is directly convolved with a small (we used just four non-zero elements) asymmetrical kernel AK, resulting in a sequence of 16×16 pixel tiles, overlapping by 8 pixels (input pixels are not limited to 16×16 tiles).
    3. Each tile is multiplied by a window function, folded and converted with 8×8 pixel DCT-IV[4] – equivalent of the 16×16->8×8 MDCT.
    4. 8×8 result tiles are multiplied by symmetrical kernels (SK) – equivalent of convolving the pre-MDCT signal.
    5. Each channel is subject to a low-pass filter, implemented by multiplication in the frequency domain, as these filters are indeed symmetrical. The cutoff frequency is different for the green (LPF1) and the other (LPF2) colors as there are more source samples for the first. That was the last step before the inverse transformation presented in the previous blog post; now we continued with a few more.
    6. Natural images have strong correlation between the different color channels, so most image processing (and compression) algorithms involve converting the pure color channels into intensity (Y) and two color difference signals that have lower bandwidth than the intensity. There are different standards for the color conversion coefficients, and here we are free to use any of them, as this process is not part of a matched encoder/decoder pair (a minimal example is sketched after this list). All such conversions can be represented as a 3×3 matrix multiplication by the (r,g,b) vector.
    7. Two of the output signals – color differences are subject to an additional bandwidth limiting by LPF3.
    8. IMDCT includes 8×8 DCT-IV, unfolding 8×8 into 16×16 tiles, second multiplication by the window function and accumulation of the overlapping tiles in the pixel domain.
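
    As an illustration of step 6, the whole color conversion is a single 3×3 matrix applied per pixel. This is only a sketch, not Elphel’s actual code, and the BT.601-style coefficients are just one common choice (as noted above, any consistent set works here):

    /* Sketch of step 6: RGB -> Y plus two color differences via a 3x3 matrix.
     * The BT.601-style coefficients are an example, not the ones used by Elphel. */
    static const double m[3][3] = {
        {  0.299,     0.587,     0.114    },  /* Y  */
        { -0.168736, -0.331264,  0.5      },  /* Cb */
        {  0.5,      -0.418688, -0.081312 },  /* Cr */
    };

    static void rgb_to_ycc(const double rgb[3], double ycc[3])
    {
        for (int i = 0; i < 3; i++)
            ycc[i] = m[i][0] * rgb[0] + m[i][1] * rgb[1] + m[i][2] * rgb[2];
    }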

    Nonlinear image enhancement: edge emphasis, noise reduction

    For some applications the output data is already useful – ideally it has all the optical aberrations compensated so the remaining far-reaching inter-pixel correlation caused by a camera system is removed. Next steps (such as stereo matching) can be done on- (or off-) line, and the algorithms do not have to deal with the lens specifics. Other applications may benefit from additional processing that improves image quality – at least the perceived one.

    Such processing may target the following goals:

    1. Reduce the remaining signal modulation caused by the Bayer pattern (each source pixel carries data about a single color component, not all 3) – trying to remove it with just an LPF would blur the image itself.
    2. Detect and enhance edges, as most useful high-frequency elements represent locally linear features
    3. Reduce visible noise in the uniform areas (such as blue sky) where significant (especially for the small-pixel sensors) noise originates from the shot noise of the pixels. This noise is amplified by the aberration correction that effectively increases the high frequency gain of the system.

    Some of these three goals overlap and can be addressed simultaneously – edge detection can improve de-mosaic quality and reduce related colored artifacts on the sharp edges if the signal is blurred along the edges and simultaneously sharpened in the orthogonal direction. Areas that do not have pronounced linear features are likely to be uniform and so can be low-pass filtered.

    The non-linear processing produces modified pixel value using 3×3 pixel array centered around the current pixel. This is a two-step process:

    • First the 3×3 center-symmetric matrices (one for Y, another for color) of coefficients are calculated using the Y channel data, then
    • they are applied to the Y and color components by replacing the pixel value with the inner product of the calculated coefficients and the original data.

    Signal flow for one channel is presented in Fig.3:

    Fig.3. Non-linear image processing: edge emphasis and noise reduction

    1. Four inner products are calculated for the same 9-sample Y data and the matrices shown (corresponding to second derivatives along the vertical, horizontal and the two diagonal directions) – see the sketch after this list.
    2. Each of these values is squared and
    3. the following four 3×3 matrices are multiplied by these values. Matrices are symmetrical around the center, so gray-colored cells do not need to be calculated.
    4. Four matrices are then added together and scaled by a variable parameter K1. The first two matrices are opposite to each other, and so are the second two. So if the absolute value of the two orthogonal second derivatives are equal (no linear features detected), the corresponding matrices will annihilate each other.
    5. A separate 3×3 matrix representing a weighted running average, scaled by K2 is added for noise reduction.
    6. The sum of the positive values is compared to a specified threshold value, and if it exceeds it, the whole matrix is proportionally scaled down – that makes different line directions “compete” against each other and against the blurring kernel.
    7. The sum of all 9 elements of the calculated array is zero, so the default unity kernel is added and when correction coefficients are zeros, the result pixels will be the same as the input ones.
    8. Inner product of the calculated 9-element array and the input data is calculated and used as a new pixel value. Two of the arrays are created from the same Y channel data – one for Y and the other for two color differences, configurable parameters (K1, K2, threshold and the smoothing matrix) are independent in these two cases.
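
    A rough sketch of steps 1 and 2 above (the [1,-2,1]-style kernels below are generic stand-ins for illustration; the actual matrices used are the ones shown in Fig.3):

    /* Directional second derivatives on the 3x3 Y neighborhood, then squared.
     * y[9] holds the 3x3 Y samples centered on the current pixel, row by row. */
    static const int d2[4][9] = {
        { 0, 1, 0,   0, -2, 0,   0, 1, 0 },  /* vertical   */
        { 0, 0, 0,   1, -2, 1,   0, 0, 0 },  /* horizontal */
        { 1, 0, 0,   0, -2, 0,   0, 0, 1 },  /* first diagonal  */
        { 0, 0, 1,   0, -2, 0,   1, 0, 0 },  /* second diagonal */
    };

    static void directional_energy(const double y[9], double energy[4])
    {
        for (int k = 0; k < 4; k++) {
            double d = 0.0;
            for (int i = 0; i < 9; i++)
                d += d2[k][i] * y[i];  /* inner product (step 1) */
            energy[k] = d * d;         /* squared response (step 2) */
        }
    }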

    Next steps

    How much is it possible to warp?

    The described method of optical aberration correction is tested with a software implementation that uses only operations that can be ported to the FPGA, so we are almost ready to get back to Verilog programming. One more thing to try first is to see if it is possible to merge this correction with a minor distortion correction. DFT and DCT transforms are not good at scaling images (when using the same pixel grid). It is definitely not possible to rectify large areas of the fisheye images, but maybe a small (fraction of a pixel per tile) stretch can still be absorbed in the same step as the shifting? This may have several implications.

    Single-step image rectification

    It would definitely be attractive to eliminate an additional processing step and save FPGA resources and/or decrease the processing time. But there is more to it than that – re-sampling degrades image resolution. For that reason we use a half-pixel grid for the offline processing, but it increases the amount of data 4 times, and the processing resources – at least 4 times as well.

    When working with the whole-pixel grid (as we plan to implement in the camera FPGA) we already deal with partial pixel shifts during convolution for aberration correction, so it would be very attractive to combine these two fractional pixel shifts into one (the calibration process uses a half-pixel grid) and so avoid double re-sampling and the related image degradation.

    Using analytical lens distortion model with the precision of the pixel mapping

    Another goal that seems achievable is to absorb at least the table-based pixel mapping. Real lenses can be described by the analytical formula of a radial distortion model only to some precision. Each element can have errors, and the multi-lens assembly inevitably has some mis-alignments – all that makes the lenses differ and deviate from the perfect symmetry of the radial model. When we were working with the (rather low distortion) wide angle Evetar N125B04530W lenses we were able to get to 0.2-0.3 pix root mean square of the reprojection error in a 26-lens camera system when using a radial distortion model with up to the 8-th power of the radial polynomial (with insignificant improvement when going from the 6-th to the 8-th power). That error was reduced to 0.05..0.07 pixels when we implemented table-based pixel mapping for the remaining (after the radial model) distortions. The difference between the actual lens and one of the standard lens models – polynomial for the low-distortion ones, and f-theta for fisheye and “f-theta” lenses (where the angle from the optical axis depends approximately linearly on the distance from the center in the focal plane) – is rather small, so it is a good candidate to be absorbed by the convolution step. While this will not eliminate re-sampling when the image is rectified, this distortion correction process will have a simple analytical formula (already supported by many programs) and will not require a full pixel mapping table.
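
    For reference, the radial distortion model referred to here is the usual even-order polynomial in the distance r from the optical center (my notation; the post uses terms up to the 8-th power):

    r_d = r \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + k_4 r^8 \right)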

    High resolution Z-axis (distance) measurement with stereo matching of multiple images

    Image rectification is an important precondition to perform correlation-based stereo matching of two or more images. When finding the correlation between the images of a relatively large and detailed object it is easy to get resolution of a small fraction of a pixel. And this proportionally increases the distance measurement precision for the same base (distance between the individual cameras). Among other things (such as mechanical and thermal stability of the system) this requires precise measurement of the sub-camera distortions over the overlapping field of view.

    When correlating multiple images, the far objects (the most challenging ones to get precise distance information for) have low disparity values (maybe just a few pixels), so instead of the complete rectification of the individual images it may be sufficient to have a good “mutual rectification”, so that the processed images of an object at infinity will match on each of the individual images with the same sub-pixel resolution as we achieved for off-line processing. This will require mechanically orienting each sub-camera sensor parallel to the others, pointing them in the same direction, and preselecting lenses for matching focal length. After that (when the mechanical match is within a reasonable few percent range) – performing calibration and calculating the convolution kernels that will accommodate the remaining distortion variations of the sub-cameras. In this case application of the described correction procedure in the camera will result in precisely matched images ready for correlation.

    These images will not be perfectly rectified, and measured disparity (in pixels) as well as the two (vertical and horizontal) angles to the object will require additional correction. But this X/Y resolution is much less critical than the resolution required for the Z-measurements and can easily tolerate some re-sampling errors. For example, if a car at a distance of 20 meters is viewed by a stereo camera with 100 mm base, then the same pixel error that corresponds to a (practically negligible) 10 mm horizontal shift will lead to a 2 meter error (10%) in the distance measurement.
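
    The numbers in that example follow from the usual small-angle stereo relations (my derivation of the figures quoted above): a 10 mm lateral shift at Z = 20 m corresponds to an angular error \delta = 0.01 / 20 = 5 \cdot 10^{-4} rad, and since Z = B / \alpha for disparity angle \alpha and baseline B = 0.1 m,

    \Delta Z \approx \frac{Z^2}{B} \, \delta = \frac{(20\,\mathrm{m})^2}{0.1\,\mathrm{m}} \cdot 5 \cdot 10^{-4} = 2\,\mathrm{m} \quad (10\%)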

    References

    [1] Malvar, Henrique S. Signal processing with lapped transforms. Artech House, 1992.

    [2] Thiébaut, Éric, et al. “Spatially variant PSF modeling and image deblurring.” SPIE Astronomical Telescopes+ Instrumentation. International Society for Optics and Photonics, 2016. pdf

    [3] Řeřábek, M., and P. Pata. “The space variant PSF for deconvolution of wide-field astronomical images.” SPIE Astronomical Telescopes+ Instrumentation. International Society for Optics and Photonics, 2008. pdf

    [4] Britanak, Vladimir, Patrick C. Yip, and Kamisetty Ramamohan Rao. Discrete cosine and sine transforms: general properties, fast algorithms and integer approximations. Academic Press, 2010.

    by Andrey Filippov at January 20, 2017 04:55 AM

    January 16, 2017

    ZeptoBARS

    Vishay TSOP4838 - IR receiver module : weekend die-shot

    Vishay TSOP4838 - decodes IR commands sent with 38kHz modulation. This modulation (and narrow-band filter on receiver module) is required to eliminate ambient light sources which could flicker somewhere at 50-100Hz or 20-30kHz (bad CFL/LEDs). Black (IR transparent) plastic also helps with background noise.


    Die size 590x594 µm.

    Photodiode is on a separate die:

    Die size 1471x1471 µm

    January 16, 2017 02:37 PM

    January 08, 2017

    Altus Metrum

    embedded-arm-libc

    Finding a Libc for tiny embedded ARM systems

    You'd think this problem would have been solved a long time ago. All I wanted was a C library to use in small embedded systems -- those with a few kB of flash and even fewer kB of RAM.

    Small system requirements

    A small embedded system has a different balance of needs:

    • Stack space is limited. Each thread needs a separate stack, and it's pretty hard to move them around. I'd like to be able to reliably run with less than 512 bytes of stack.

    • Dynamic memory allocation should be optional. I don't like using malloc on a small device because failure is likely and usually hard to recover from. Just make the linker tell me if the program is going to fit or not.

    • Stdio doesn't have to be awesomely fast. Most of our devices communicate over full-speed USB, which maxes out at about 1MB/sec. A stdio setup designed to write to the page cache at memory speeds is over-designed, and likely involves lots of buffering and fancy code.

    • Everything else should be fast. A small CPU may run at only 20-100MHz, so it's reasonable to ask for optimized code. They also have very fast RAM, so cycle counts through the library matter.

    Available small C libraries

    I've looked at:

    • μClibc. This targets embedded Linux systems, and also appears dead at this time.

    • musl libc. A more lively project; still, definitely targets systems with a real Linux kernel.

    • dietlibc. Hasn't seen any activity for the last three years, and it isn't really targeting tiny machines.

    • newlib. This seems like the 'normal' embedded C library, but it expects a fairly complete "kernel" API and the stdio bits use malloc.

    • avr-libc. This has lots of Atmel assembly language, but is otherwise ideal for tiny systems.

    • pdclib. This one focuses on small source size and portability.

    Current AltOS C library

    We've been using pdclib for a couple of years. It was easy to get running, but it really doesn't match what we need. In particular, it uses a lot of stack space in the stdio implementation as there's an additional layer of abstraction that isn't necessary. In addition, pdclib doesn't include a math library, so I've had to 'borrow' code from other places where necessary. I've wanted to switch for a while, but there didn't seem to be a great alternative.

    What's wrong with newlib?

    The "obvious" embedded C library is newlib. Designed for embedded systems with a nice way to avoid needing a 'real' kernel underneath, newlib has a lot going for it. Most of the functions have a good balance between speed and size, and many of them even offer two implementations depending on what trade-off you need. Plus, the build system 'just works' on multi-lib targets like the family of cortex-m parts.

    The big problem with newlib is the stdio code. It absolutely requires dynamic memory allocation and the amount of code necessary for 'printf' is larger than the flash space on many of our devices. I was able to get a cortex-m3 application compiled in 41kB of code, and that used a smattering of string/memory functions and printf.

    How about avr libc?

    The Atmel world has it pretty good -- avr-libc is small and highly optimized for atmel's 8-bit avr processors. I've used this library with success in a number of projects, although nothing we've ever sold through Altus Metrum.

    In particular, the stdio implementation is quite nice -- a 'FILE' is effectively a struct containing pointers to putc/getc functions. The library does no buffering at all. And it's tiny -- the printf code lacks a lot of the fancy new stuff, which saves a pile of space.
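
    For illustration, the usual avr-libc pattern looks roughly like this (a sketch assuming an ATmega-style UART; the register names are illustrative and not from the post):

    #include <stdio.h>
    #include <avr/io.h>

    /* avr-libc stdio: a FILE is just a pair of function pointers, no buffering. */
    static int uart_putchar(char c, FILE *stream)
    {
        loop_until_bit_is_set(UCSR0A, UDRE0);  /* wait for the transmit register */
        UDR0 = c;
        return 0;
    }

    static FILE uart_out = FDEV_SETUP_STREAM(uart_putchar, NULL, _FDEV_SETUP_WRITE);

    int main(void)
    {
        stdout = &uart_out;   /* printf() now goes straight to the UART */
        printf("hello\n");
        for (;;);
    }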

    However, many of the places where performance is critical are written in assembly language, making it pretty darn hard to port to another processor.

    Mixing code together for fun and profit!

    Today, I decided to try an experiment to see what would happen if I used the avr-libc stdio bits within the newlib environment. There were only three functions written in assembly language, two of them were just stubs while the third was a simple ultoa function with a weird interface. With those coded up in C, I managed to get them wedged into newlib.

    Figuring out the newlib build system was the only real challenge; it's pretty awful having generated files in the repository and a mix of autoconf 2.64 and 2.68 version dependencies.

    The result is pretty usable though; my STM32L discovery board demo application is only 14kB of flash while the original newlib stdio bits needed 42kB, and that was still missing all of the 'syscalls', like read, write and sbrk.

    Here's gitweb pointing at the top of the tiny-stdio tree:

    gitweb

    And, of course you can check out the whole thing

    git clone git://keithp.com/git/newlib
    

    'master' remains a plain upstream tree, although I do have a fix on that branch. The new code is all on the tiny-stdio branch.

    I'll post a note on the newlib mailing list once I've managed to subscribe and see if there is interest in making this option available in the upstream newlib releases. If so, I'll see what might make sense for the Debian libnewlib-arm-none-eabi packages.

    by keithp's rocket blog at January 08, 2017 07:32 AM

    Elphel

    Lens aberration correction with the lapped MDCT

    Modern small-pixel image sensors exceed the resolution of the lenses, so it is the optics of the camera, not the raw sensor “megapixels”, that define how sharp the images are, especially in the off-center areas. Multi-sensor camera systems that depend on tiled images do not have any center areas, so the overall system resolution may be as low as that of its worst part.

    Fig. 1. Lateral chromatic aberration and Bayer mosaic: a) monochrome (green) PSF, b) composite color PSF, c) Bayer mosaic of the sensor (direction of aberration shown), d) distorted mosaic matching the chromatic aberration in b).

    De-mosaic processing and chromatic aberrations

    Our current cameras’ role is to preserve the raw sensor data while providing some moderate compression; all the image correction is applied during post-processing. Handling the lens aberration has to be done before color conversion (or de-mosaicing). When converting Bayer data to color images, most cameras start with the calculation of the “missing” colors in the RG/GB pattern using 3×3 or 5×5 kernels; this procedure relies on the specific arrangement of the color filters.

    Each of the red and blue pixels has 4 green ones at the same distance (pixel pitch) and 4 of the opposite (R for B and B for R) color at the equidistant diagonal locations. Fig.1. shows how lateral chromatic aberration disturbs these relations.

    Fig.1a is the point-spread function (PSF) of the green channel of the sensor. The resolution of the PSF measurement is twice that of the pixel pitch, so the lens is not that bad – the horizontal distance between the 2 greens in Fig.1c corresponds to 4 pixels of Fig.1a. It is also clearly visible that the PSF is elongated and the radial resolution in this part of the image is better than the tangential one (the lens center is down and to the left).

    Fig.1b shows the superposition of the 3 color channels: the blue center is shifted up-and-right by approximately 2 PSF pixels (so one actual pixel period of the sensor) and the red one – half a pixel left-and-down from the green center. So the point light of a star, centered around some green pixel, will not just spread uniformly to the two “R”s and two “B”s shown connected with lines in Fig.1c, but to other pixels as well, and in a different order. Fig.1d illustrates the effective positions of the sensor pixels that match the lens aberration.

    Aberrations correction at post-processing stage

    When we perform off-line image correction we start by separating each color channel and re-sampling it at twice the pixel pitch frequency (adding a zero sample between each pair of measured ones) – this makes it possible to shift the image by a fraction of a pixel while preserving resolution and without introducing phase errors that may be visually acceptable but hurt when relying on sub-pixel resolution during correlation of images.

    Next, the full image is split into overlapping square tiles that are converted to the frequency domain using a 2-d DFT and then multiplied by the inverted PSF kernels – individual for each color channel and each part of the whole image (the calibration procedure provides a 2-d array of PSF kernels). Such multiplication in the frequency domain is equivalent to the (much more computationally expensive) image convolution (or deconvolution, as the desired result is to undo the convolution of the ideal image with the PSF of the actual lens). This is possible because of the famous convolution-multiplication property of the Fourier transform and its discrete versions.
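
    A minimal NumPy sketch of that principle (not the production code; it uses plain circular convolution and a hypothetical regularization constant eps): blurring is a multiplication by the PSF spectrum, so multiplying by a regularized inverse of that spectrum approximately undoes it:

        import numpy as np

        rng = np.random.default_rng(0)
        image = rng.random((64, 64))            # stand-in for one tile of one color channel
        psf = np.zeros((64, 64))
        psf[0, 0], psf[0, 1], psf[1, 0] = 0.6, 0.25, 0.15    # toy PSF, sums to 1

        PSF = np.fft.fft2(psf)
        blurred = np.fft.ifft2(np.fft.fft2(image) * PSF).real    # convolution == multiplication

        eps = 1e-3                                               # regularization, assumed
        inv_psf = np.conj(PSF) / (np.abs(PSF) ** 2 + eps)        # "inverted PSF kernel"
        restored = np.fft.ifft2(np.fft.fft2(blurred) * inv_psf).real
        print(np.max(np.abs(restored - image)))                  # small residual, limited by eps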

    After each color channel tile is corrected and the phases of the color components match (lateral chromatic aberration is compensated), the data may be subjected to non-linear processing that relies on the properties of the images (like detection of lines and edges) to combine the color channels, trying to achieve the highest spatial resolution without introducing color artifacts. Our current software performs this while the data is in the frequency domain, before the inverse Fourier transform and the merging of the lapped tiles into the restored image.

    Fig.2. Histogram of the difference between the original image and the one after direct and inverse MDCT (with 8×8 pixel DCT-IV)

    MDCT of an image – there and back again

    It would be very appealing to use a DCT-based MDCT instead of the DFT for aberration correction. With just an 8×8 point DCT-IV it may be possible to calculate a direct 16×16 -> 8×8 MDCT and an 8×8 -> 16×16 IMDCT providing perfect reconstruction of the image. An 8×8 pixel DCT should be able to handle convolution kernels with an 8-pixel radius – the same would require a 16×16 pixel DFT. I knew there would be a challenge in handling non-symmetrical kernels, but first I tried a 2-d MDCT to convert and reconstruct a camera image that way. I was not able to find an efficient Java implementation of the DCT-IV, so I had to write some code following the algorithms presented in [1].

    That worked nicely – when I obtained a histogram of the difference between the original image (pixel values were in the range of 0 to 255) and the restored one, IMDCT(MDCT(original)), it demonstrated negligible error. Of course I had to discard the 8-pixel border of the image added by replication before the procedure – these border pixels are not covered by 4 overlapping tiles as all the internal ones are, and so cannot be reconstructed.
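
    For readers who want to reproduce the idea, here is a minimal 1-d NumPy sketch (not the Java code used above) of a naive 16 -> 8 MDCT and its inverse with a sine window; overlap-adding the blocks reconstructs the interior of the signal to machine precision, which is the property exploited here in 2-d:

        import numpy as np

        N = 8                                   # 16-sample blocks -> 8 MDCT coefficients
        n = np.arange(2 * N)
        k = np.arange(N)
        C = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
        w = np.sin(np.pi / (2 * N) * (n + 0.5))   # window satisfying the Princen-Bradley condition

        def mdct(block):                          # 2N samples -> N coefficients
            return C @ (w * block)

        def imdct(coeffs):                        # N coefficients -> 2N windowed samples
            return w * (2.0 / N) * (C.T @ coeffs)

        x = np.random.default_rng(0).standard_normal(6 * N)
        xp = np.concatenate([np.zeros(N), x, np.zeros(N)])   # pad so edge samples get two tiles
        y = np.zeros_like(xp)
        for s in range(0, len(xp) - N, N):                    # 50% overlapped blocks
            y[s:s + 2 * N] += imdct(mdct(xp[s:s + 2 * N]))
        print(np.max(np.abs(y[N:-N] - x)))                    # ~1e-15: perfect reconstruction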

    When this is done in the camera FPGA the error will be higher – the DCT implementation there uses only integer DSP blocks, not capable of the double-precision calculations of the Java code. But for the small 8×8 transforms it should be rather easy to manage the calculation precision to the required level.

    Convolution with MDCT

    It was also easy to implement a low-pass symmetrical filter by multiplying the 8×8 pixel MDCT output tiles by the DCT-III transform of the desired convolution kernel. To convolve f ☼ g you need to multiply DCT_IV(f) by DCT_III(g) in the transform domain [2], but that does not mean that DCT-III also has to be implemented in the FPGA – the de-convolution kernels can be prepared during off-line calibration and provided to the camera in the required form.

    But not much more can be done for convolution with asymmetric kernels – they require either an additional DST (so both DCT and DST) of the image and/or padding the data with extra zeros [3],[4] – all of which reduces the advantage of DCT over DFT. Asymmetric kernels are required for the lens aberration corrections, and Fig.1 shows two cases not easily suitable for MDCT:

    • lateral chromatic aberrations (or just shift in the image domain) – Fig.1b and
    • “diagonal” kernels (Fig.1a) – not an even function of each of the vertical and horizontal axes.
    Fig.3. Convolution kernel factorization: a) required asymmetrical and shifted kernel, b) 4-point direct convolution with (sparse) Bayer color channel data, c) symmetric convolution kernel for MDCT, d) symmetric kernel (DCT-III of c) to multiply the DCT-IV kernels of the image.

    Symmetric kernels are like what you can do with a twice folded piece of paper, cut to some shape and unfolded, with folds oriented strictly vertically and horizontally.

    Factorization of the convolution

    Another way to handle convolution with non-symmetrical kernels is to split it in two – first convolve with an asymmetrical kernel directly, and then use the MDCT and a symmetrical kernel. The input data for the combined convolution is split Bayer data, so each color channel receives a sparse sequence – the green one has only half non-zero elements, and red and blue only 1/4 such pixels. In the case of the half-pixel grid (to handle fractional-pixel shifts) the relative amount of non-zero pixels is four times smaller, so the total number of multiplications is the same as for the whole-pixel grid.

    The goal of such factorization is to minimize the number of non-zero elements in the asymmetrical kernel, imposing no restrictions on the symmetrical one. Factorization does not have to be absolutely precise – the effect of deconvolution is limited by several factors, the most important being the amplification of the sensor noise (such as shot noise). The required number of non-zero pixels may vary with the type of distortion; for the lens we experimented with (Sunex DSL227 fisheye) just 4 pixels were sufficient to achieve 2-4% error for each of the kernel tiles. Four-pixel kernels make it 1 multiplication for each of the red and blue pixels and 2 multiplications per green. As the kernels are calculated during the camera off-line calibration, it should be possible to simultaneously generate scheduling of the DSP and buffer memories to further reduce the required run-time FPGA resources.

    Fig.3 illustrates how the deconvolution kernel for the aberration correction is split into two for the consecutive convolutions. Fig.3a shows the required deconvolution kernel determined during the existing calibration procedure. This kernel is shown far off-center even for the green channel – it appeared near the edge of the fisheye lens field of view, and as the current lens model is based on a radial polynomial and is not efficient for fisheye (f-theta) lenses, the aberration correction by deconvolution had to absorb that extra shift. As the convolution kernel has a fixed number of non-zero elements, the computational complexity does not depend on the maximal kernel dimensions. Fig.3b shows the determined asymmetric convolution kernel of 4 pixels, and Fig.3c – the kernel for symmetric convolution with the MDCT; the unique 8×8 pixel part of it (inside the red square) is replicated to the other 3 quadrants by mirroring along row 0 and column 0 because of the whole-pixel even symmetry – the right boundary condition for DCT-III. Fig.3d contains the result of the DCT-III applied to the data shown in Fig.3c.

    Fig.4. Symmetric convolution kernel tiles in the MDCT domain. The full image (click to open) has peripheral kernels replicated, as there was no calibration data outside of the fisheye lens field of view.

    There should be more efficient ways to find optimal combinations of the two kernels; currently I used a combination of the Levenberg-Marquardt Algorithm (LMA), which minimizes the approximation error (the root mean square of the differences between the given kernel and the convolution of the two calculated ones), and adding/replacing pixels in the asymmetrical kernel, sorting the variants for the best LMA fit. Experimental code (FactorConvKernel.java) for the kernel calculation is in the same git repository.

    Each kernel tile is processed independently of the neighbors, so while the aberration deconvolution kernels are changing smoothly between the adjacent tiles, the individual asymmetrical (for direct convolution with Bayer signal data) and symmetrical (for convolution by multiplication in the MDCT space) may change dramatically (see Fig.4). But when the direct convolution is applied before the window multiplication to the source pixels that contribute to a 16×16 pixel MDCT overlapping tile, then the result (after IMDCT) depends on the convolution of the two kernels, not the individual ones.

    Deconvolving the test image

    The next step was to apply the convolution to the test image and see if there are any visible blocking (or other) artifacts and if the image sharpness was improved. Only a single (green) channel was tested, as there is no DCT-based color conversion code in this program yet. The program was tested with the whole-pixel grid (not half-pixel), so some reduction of sharpness caused by the fractional pixel shift was expected. For the “before/after” aberration correction comparison I used two pairs – one with the raw Bayer data (half of the pixels are black in a checkerboard pattern) and the other with the Bayer pattern after a 0.4 pix low-pass filter to reduce the checkerboard pattern. Without this filtering the image would be either half as bright or (as in these pictures) saturated at lower levels (checkerboard 0/255 alternating pixels result in an average gray level of only half of the full range).

    Fig.5. Alternating images of a segment (green channel only): low-pass filter of the Bayer mosaic before and after deconvolution. Click the image to show a comparison with the raw Bayer component.
    Raw Bayer
    Bayer data, low pass filter, sigma = 0.4 pix
    Deconvolved

    Fig.5 shows an animated GIF of a fraction of the whole image; clicking the image shows a comparison to the raw Bayer (with the limited gray level), and the caption links to the full-size images for these 3 modes.

    No de-noise code is used, so amplification of the pixel shot noise is clearly visible, especially on uniform surfaces, but aliasing cancellation remained functional even with convolution kernels changing as abruptly as those shown in Fig.4.

    Conclusions

    Algorithms suitable for FPGA implementation were tested with the simulation code. Processing of images subject to the typical optical aberrations of the DSL227 fisheye lens does not add significantly to the computational complexity compared to pure symmetric convolution using the lapped MDCT based on the 8×8 pixel two-dimensional DCT-IV.

    This solution can be used as a first stage of the real time image correction and rectification, capable of sub-pixel resolution in multiple application areas, such as 3-d reconstruction and autonomous navigation.

    References

    [1] Plonka, Gerlind, and Manfred Tasche. “Fast and numerically stable algorithms for discrete cosine transforms.” Linear algebra and its applications 394 (2005): 309-345.
    [2] Martucci, Stephen A. “Symmetric convolution and the discrete sine and cosine transforms.” IEEE Transactions on Signal Processing 42.5 (1994): 1038-1051. pdf
    [3] Suresh, K., and T. V. Sreenivas. “Linear filtering in DCT IV/DST IV and MDCT/MDST domain.” Signal Processing 89.6 (2009): 1081-1089. Abstract and full text pdf.
    [4] Reju, Vaninirappuputhenpurayil Gopalan, Soo Ngee Koh, and Ing Yann Soon. “Convolution using discrete sine and cosine transforms.” IEEE Signal Processing Letters 14.7 (2007): 445. pdf
    [5] Malvar, Henrique S. “Extended lapped transforms: Properties, applications, and fast algorithms.” IEEE Transactions on Signal Processing 40.11 (1992): 2703-2714.

    by Andrey Filippov at January 08, 2017 01:19 AM

    December 30, 2016

    Harald Welte

    Some thoughts on 33C3

    I've just had the pleasure of attending all four days of 33C3 and have returned home with somewhat mixed feelings.

    I've been a regular visitor and speaker at CCC events since 15C3 in 1998, which among other things means I'm an old man now. But I digress ;)

    The event has come extremely far in those years. And to be honest, I struggle with the size. Back then, it was a meeting of like-minded hackers. You had the feeling that you know a significant portion of the attendees, and it was easy to connect to fellow hackers.

    These days, both the number of attendees and the size of the event make you feel much more like you're among the general public, rather than at a meeting of fellow hackers. Yes, it is good to see that more people are interested in what the CCC (and the selected speakers) have to say, but somehow it comes at the price that I (and I suspect other old-timers) feel less at home. It feels too much like various other technology related events.

    One aspect creating a certain feeling of estrangement is also the venue itself. There are an incredible number of rooms, with a labyrinth of hallways, stairs, lobbies, etc. The size of the venue simply makes it impossible to _accidentally_ run into all of your fellow hackers and friends. If I want to meet somebody, I have to make an explicit appointment. That is an option that exists most of the rest of the year, too.

    While fefe is happy about the many small children attending the event, to me this seems somewhat alien and possibly inappropriate. I guess from teenage years onward it certainly makes sense, as they can follow the talks and participate in the workshop. But below that age?

    The range of topics covered at the event has also become wider, or at least I feel that way. Topics like IT security, data protection, privacy, intelligence/espionage and learning about technology have always been present during all those years. But these days we have bloggers sitting on stage and talking about bottles of wine (seriously?).

    Contrary to many, I also really don't get the excitement about shows like 'Methodisch Inkorrekt'. Seems to me like mainstream-compatible entertainment in the spirit of the 1990s Knoff Hoff Show, without much potential to make the audience want to dig deeper into (information) technology.

    by Harald Welte at December 30, 2016 12:00 AM

    33C3 talk on dissecting cellular modems

    Yesterday, together with Holger 'zecke' Freyther, I co-presented at 33C3 about Dissecting modern (3G/4G) cellular modems.

    This presentation covers some of our recent explorations into a specific type of 3G/4G cellular modems, which next to the regular modem/baseband processor also contain a Cortex-A5 core that (unexpectedly) runs Linux.

    We want to use such modems for building self-contained M2M devices that run the entire application inside the modem itself, without any external needs except electrical power, SIM card and antenna.

    Next to that, they also provide an ideal platform for testing the Osmocom network-side projects for running GSM, GPRS, EDGE, UMTS and HSPA cellular networks.

    You can find the Slides and the Video recordings in case you're interested in more details about our work.

    The results of our reverse engineering can be found in the wiki at http://osmocom.org/projects/quectel-modems/wiki together with links to the various git repositories containing related tools.

    As with all the many projects that I happen to end up doing, it would be great to get more people contributing to them. If you're interested in cellular technology and want to help out, feel free to register at the osmocom.org site and start adding/updating/correcting information to the wiki.

    You can e.g. help by

    • playing with the modem and documenting your findings
    • reviewing the source code released by Qualcomm + Quectel and documenting your findings
    • helping us to create a working OE build with our own kernel and rootfs images as well as opkg package feeds for the modems
    • helping to reverse engineer the DIAG and QMI protocols as well as working on the open source programs that interact with them

    by Harald Welte at December 30, 2016 12:00 AM

    December 29, 2016

    Harald Welte

    Contribute to Osmocom 3.5G and receive a free femtocell

    In 2016, Osmocom gained initial 3.5G support with osmo-iuh and the Iu interface extensions of our libmsc and OsmoSGSN code. This means you can run your own small open source 3.5G cellular network for SMS, Voice and Data services.

    However, the project needs more contributors: Become an active member in the Osmocom development community and get your nano3G femtocell for free.

    I'm happy to announce that my company sysmocom hereby issues a call for proposals to the general public. Please describe in a short proposal how you would help us improve the Osmocom project if you were to receive one of those free femtocells.

    Details of this proposal can be found at https://sysmocom.de/downloads/accelerate_3g5_cfp.pdf

    Please contact mailto:accelerate3g5@sysmocom.de in case of any questions.

    by Harald Welte at December 29, 2016 12:00 AM

    December 25, 2016

    ZeptoBARS

    ST USBLC6-2 - USB protection chip : weekend die-shot

    ST USBLC6-2 has 4 diodes and 1 Zener to protect your USB gear.
    Die size 1084x547 µm.


    December 25, 2016 09:01 PM

    December 24, 2016

    Bunnie Studios

    Name that Ware December 2016

    The Ware for December 2016 is below.

    Wishing everyone a safe and happy holiday season!

    by bunnie at December 24, 2016 05:08 PM

    Winner, Name that Ware November 2016

    The Ware for November 2016 is a Link Instruments MSO-28 USB scope. Congrats to Antoine for the first guess which got the model number correct, email me for your prize!

    by bunnie at December 24, 2016 05:08 PM

    December 23, 2016

    Elphel

    Measuring SSD interrupt delays


    Sometimes we need to test disks connected to the camera and find out if a particular model is a good candidate for an in-camera stream recording application. Such disks should not only be fast enough in terms of write speed, they should also have a short ‘response time’ to write commands. This ‘response time’ is basically the time between a command being sent to the disk and the disk reporting that the command has finished. The time between the two events is related to the total write speed, but it can vary due to processes going on in the internal disk controller. The fluctuations in disk response time can be an important parameter for high-bandwidth streaming applications in embedded systems, as this value allows estimating the data buffer size needed during recording, but it may not be a very critical parameter for typical PC applications, as modern computers are equipped with large amounts of RAM. We did not find any suitable parameter in the disk specifications we had which would give us a hint for the buffer size estimation, so we developed a small test program for this purpose.

    This program basically resembles camogm (the in-camera recording program) in its operation and allows us to write repeating blocks of data containing a counter value and then check the consistency of the data written. The program works directly with the disk driver and collects some statistics during its operation. The disk driver, among other things, measures the time between two events: when a write command is issued and when the command completion interrupt from the controller is received. This time can be used to measure disk write speed, as the amount of data sent to the disk with each command is also known. In general, this time floats slightly around its average value, given that the amount of data written with each command is almost the same. But long-run tests have shown that sometimes the interrupt return time after a write command can be much longer than the average time.

    We decided to investigate this situation in a little more detail and tested two SSDs with our test program. The disks used for the tests were a SanDisk SD8SMAT128G1122 and a Crucial CT250MX200SSD6, both connected to the camera's eSATA port over an M.2 SSD adapter. We had used these disks before and they demonstrated different performance during recording. We ran camogm_test to write 3 MB blocks of data in cyclic mode. The program collected the delayed interrupt times reported by the driver as well as the amount of data written since the last delay event. The processed results of the test:

    [Charts: interrupt delay distributions for the Crucial CT250MX200SSD6 and the SanDisk SD8SMAT128G1122]

    Actual points of interest on these charts are circled in red and they show those delays that are noticeably different from average values. Below is the same data in table form:

    Disk                        Average IRQ reception time, ms   Average IRQ delay time, ms   Data recorded since last IRQ delay, GB
    CT250MX200SSD6 (250 GB)     11.9 ± 1.1                       804 ± 12.7                   499.7 ± 111.7
    SD8SMAT128G1122 (128 GB)    19.3 ± 4.8                       113 ± 6.5                    231.5 ± 11.5

    The delayed interrupt times of these disks are considerably different, although the difference in average interrupt times, which reflect disk write speeds, is not that big. It is interesting to notice that the amount of data written to disk between two consecutive interrupt delays is almost twice the total disk size. smartctl reported an increase of the Runtime_Bad_Block attribute for the CT250MX200SSD6 after each delay, but the delays occurred each time on different LBAs. Unfortunately, the SD8SMAT128G1122 does not have such a parameter among its smartctl attributes, so it is difficult to compare the two disks on this parameter.
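
    To illustrate why the worst-case delay (rather than the average write speed) is what sizes the recording buffer, here is a minimal sketch of the estimate; the stream bitrate and safety margin are assumptions for the example, not measured values:

        # Rough buffer sizing: during a stalled write the incoming stream still has to
        # be held in RAM, so the buffer must cover stream_rate * worst_case_stall.
        stream_rate_mb_s = 80.0   # assumed camera stream rate, MB/s (example value)
        worst_stall_s = 0.804     # longest observed IRQ delay for the Crucial disk, s
        margin = 2.0              # assumed safety factor

        buffer_mb = stream_rate_mb_s * worst_stall_s * margin
        print(f"suggested buffer: {buffer_mb:.0f} MB")   # ~129 MB for these numbers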

    by Mikhail Karpenko at December 23, 2016 01:56 AM

    December 18, 2016

    ZeptoBARS

    LM338K - 5A LDO in TO-3 : weekend die-shot



    Die size 1834x1609 µm.

    You can see why this giant package is nearly obsolete these days (it's been around since 1955) – a tiny crystal on a large steel case is largely limited by the thermal conduction of the steel. Modern packages with a copper base can do better while being much smaller.


    December 18, 2016 02:03 PM

    December 17, 2016

    ZeptoBARS

    DTA143ZK - PNP BJT with bias resistors : weekend die-shot

    Compared to the Infineon BCR185W, not even the bias resistors are placed under the pads, hence the larger die size (426x424 µm).


    December 17, 2016 10:10 PM

    Elphel

    DCT type IV implementation

    As we finished the basic camera functionality and tested the first Eyesis4π built with the new 10393 system boards (it is smaller, requires less power and is faster), we are moving forward with the in-camera image processing. We plan to combine our current camera calibration methods, which require off-line post-processing, with real-time image correction using the camera's own FPGA resources. This project will require switching between the actual FPGA coding and software implementations of the same algorithms before going to the next step – software is still easier to design. The first part was in the FPGA realm – to implement the fundamental image processing block that we already know we'll be using and see how much of the resources it needs.

    DCT type IV as a building block for in-camera image processing

    We consider a small (8×8 pixel) DCT-IV to be a universal block for conditioning of the raw acquired images. Operations such as correction of the lens optical aberrations, color conversion (de-mosaic) in the presence of lateral chromatic aberration, and image rectification (de-warping) are easier to perform in the frequency domain using the convolution-multiplication property and other algorithms.

    In post-processing we use the DFT (Discrete Fourier Transform) over rather large (64×64 to 512×512) tiles, but that would be too much for in-camera processing. The first difference is the tile size – for good lenses we do not need such large convolution kernels. Additionally, we plan to combine several processing steps into one (based on our off-line post-processing experience), so we do not need to sub-sample images – in our current software we double the resolution of the raw images at the beginning and scale the final result back to reduce image degradation caused by re-sampling.

    The second area where we plan to reduce computations is the replacement of the DFT with the DCT, which is designed to be fed with purely real data and so requires fewer arithmetic operations than the DFT, which processes complex input values.

    Why “type IV” of the DCT?

    Fig.1. Signal flow graph for DCT-IV

    We already have DCT type II implemented for the JPEG/JP4 compression, and we still needed another one. Type IV is used in audio compression because it can be converted to a modified discrete cosine transform (MDCT) – a procedure in which multiple overlapped windows are processed one at a time and the results are seamlessly combined without the block artifacts familiar from JPEG at low compression-quality settings. We too need a lapped transform to process large images with relatively small (much smaller than the image itself) convolution kernels, and DCT-IV is a perfect fit. An 8-point DCT-IV allows implementing the transformation of 16-point segments with 8-point overlap in a reversible manner – the inverse transformation of the 8-point data can be converted back to 16-point overlapping segments which, added together, reproduce the original data.

    There is a price to pay, though, for switching from DFT to DCT – the convolution-multiplication property, so straightforward for the FFT, gets complicated for the DCT [1]. While convolving with symmetrical kernels is still simple (just the kernel has to be transformed differently, but that is done off-line in our case anyway), arbitrary kernel convolution (or just a shift in image space needed to compensate the lateral chromatic aberration) requires both DCT-IV and DST-IV transformed data. DST-IV can be calculated with the same DCT-IV modules (just by reversing the direction of the input data and alternating the sign of the output samples), but it still requires additional hardware resources and/or more processing time. Luckily it is only needed for the direct (image domain to frequency domain) transform; the inverse transform IDCT-IV (frequency to image) does not require the DST. And IDCT-IV is actually the same as the direct DCT-IV, so we can again instantiate the same module.
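
    That reverse-and-alternate-signs relation is easy to check numerically; a minimal sketch, assuming SciPy's type-4 DCT/DST (which share the same normalization for norm=None):

        import numpy as np
        from scipy.fft import dct, dst

        x = np.random.default_rng(1).standard_normal(8)
        signs = (-1.0) ** np.arange(8)

        # DST-IV(x)[k] == (-1)^k * DCT-IV(x reversed)[k]
        lhs = dst(x, type=4)
        rhs = signs * dct(x[::-1], type=4)
        print(np.max(np.abs(lhs - rhs)))        # ~1e-15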

    Most two-dimensional transforms combine 1-d transform modules (because the DCT is a separable transform), so we too started with just an 8-point DCT. There are multiple known factorizations for such an algorithm [2] and we used one of them (based on BinDCT-IV), shown in Fig.1.

    Fig.2. Simplified diagram of the Xilinx DSP48E1 primitive (only the used functionality is shown)

    DSP primitive in Xilinx Zynq

    This algorithm is implemented with a pair of DSP48E1 [3] primitives, shown in Fig.2. This primitive is flexible and allows configuring different functionality; the diagram contains only the blocks and connections used in the current project. The central part is the multiplier (signed 18 bits by signed 25 bits) with inputs from a pair of multiplexed B registers (B1 and B2, 18 bits wide) and the pre-adder AD register (25 bits). The AD register stores the sum/difference of the 25-bit D register and a multiplexed pair of 25-bit A1 and A2 registers. Any of the inputs can be replaced by zero, so AD can receive D, A1, A2, -A1, -A2, D+A1, D-A1, D+A2 and D-A2 values. The result of the multiplier (43 bits) is stored in the M register, and the data from M is combined with the 48-bit output accumulator register P. The final adder can add or subtract M to/from one of P, the 48-bit C register or just 0, so the output P register can receive +/-M, P+/-M and C+/-M. The wrapper module dsp_ma_preadd_c.v reduces the number of DSP48E1 signals and parameters to those required for the project and, in addition to the primitive instance, has a simple model of the DSP slice to allow simulation without the DSP48E1 source code for convenience.

    Fig.3. One-dimensional 8-point DCT-IV implementation

    8-point DCT-IV transform

    The DCT-IV implementation module (Fig.3) operates in 16 clock cycles (2 clock periods per data item); the input/output permutations are not included – they can be absorbed in the data source and destination memories. The current implementation does not implement correct rounding and saturation, to save resources – such processing can be added to the outputs after analysis of the particular application data widths. This module is not in the coder/decoder signal chain, so bit-accuracy is not required.

    Data is output every other cycle (so two such modules can easily be used to increase bandwidth), while the input data ordering is more scrambled – some of the items have to appear twice in a 16-cycle period. This implementation uses two DSP48E1 primitives connected in series. The first one implements the left half of the Fig.1 graph – 3 rotators (marked R8 and two of R4), four adders and four subtracters. The second one corresponds to the right half with R1, R5, R9, R13, four adders and four subtracters. Two small memories (register files) – 2 locations before the first DSP and 4 locations before the second – effectively increase the number of the DSPs' internal D registers. The B inputs of the DSPs receive the cosine coefficients; the same ROM provides values for both DSP stages.

    The diagram shows just the data paths; all the DSP control signals, as well as the memories' write and read addresses, are generated at defined times decoded from the 16-cycle period. The decoder is based on the spreadsheet draft of the design.

    Fig.4. Two-dimensional 8×8 DCT-IV

    Two-dimensional 8×8 points DCT-IV

    The next diagram, Fig.4, shows a two-dimensional DCT type IV implementation using four of the 1-d 8-point DCT-IV modules described above. Input data arrives continuously in line-scan order; the next 64-item block may follow either immediately or after a delay of at least 16 cycles so the pipeline phases are correctly restarted. Two input 8×25 memories (the width can be reduced to match the input data; 25 is the width of the DSP48E1 inputs) are used to re-order the input data. As each of the 1-d DCT modules requires input data during more than half of the cycles (see the bottom of Fig.3), interleaving with a common memory for both channels is not possible, so each channel has to have a dedicated one. The first of the two DCT modules converts the even lines of 8 points, the other one the odd lines. The latency of the data output from the RAM in the second channel is made 1 cycle longer, so the output data from the channels also arrive in odd/even time slots and can be multiplexed to a common transpose buffer memory. The minimal size of the buffer is 2 of the 64-item pages (the width can be reduced to match application requirements), but having just a two-page buffer increases the minimal pause time between blocks (if they are not immediate); with a four-page buffer (and BRAM primitives are larger even if just halves are used) the minimal non-immediate delay of 16 cycles of a 1-d module is still valid.

    The second (vertical) pass is similar to the first (horizontal) one; it also has individual small memories for input data reordering and 2 output de-scrambler memories. It would be possible to use a single stage, but the memory would need to hold at least 17 items (>16) while the primitives are 16 deep, and I believe that splitting it in series makes it easier for the placer/router tools to implement the design.

    Next steps

    Now that the 8×8-point DCT-IV is designed and simulated, the next step is to switch to the Java coding (adding it to our ImageJ plugin for camera calibration and image post-processing), convert the calibration data to a form suitable for the future migration to the FPGA and try the processing based on the chosen 8×8 DCT-IV. When satisfied with the results – continue with the FPGA coding.

    References

    [1] Martucci, Stephen A. “Symmetric convolution and the discrete sine and cosine transforms.” IEEE Transactions on Signal Processing 42.5 (1994): 1038-1051. pdf

    [2] Britanak, Vladimir, Patrick C. Yip, and Kamisetty Ramamohan Rao. Discrete cosine and sine transforms: general properties, fast algorithms and integer approximations. Academic Press, 2010.

    [3] 7 Series DSP48E1 Slice, UG479 (v1.9), Xilinx, Sep. 2016. pdf

    by Andrey Filippov at December 17, 2016 07:15 AM

    December 16, 2016

    Harald Welte

    Accessing 3GPP specs in PDF format

    When you work with GSM/cellular systems, the definitive resource is the specifications. They were originally released by ETSI, later by 3GPP.

    The problems start with the fact that there are separate numbering schemes. Everyone in the cellular industry I know always uses the GSM/3GPP TS numbering scheme, i.e. something like 3GPP TS 44.008. However, ETSI assigns its own numbers to the specs, like ETSI TS 144008. Now in most cases, it is as simple as removing the '.' and prefixing a '1' at the beginning. However, that's not always true and there are exceptions such as 3GPP TS 01.01 mapping to ETSI TS 101855. To make things harder, there doesn't seem to be a machine-readable translation table between the spec numbers, but there's a website for spec number conversion at http://webapp.etsi.org/key/queryform.asp
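
    The simple rule (ignoring the exceptions above) is trivial to express in code; a sketch of the heuristic only, not a complete mapping:

        def etsi_ts_number(threegpp_ts: str) -> str:
            """Naive 3GPP TS -> ETSI TS number mapping: drop the dot, prefix '1'.
            Works for e.g. '44.008' -> '144008', but NOT for exceptions like
            '01.01' -> '101855', so treat it as a heuristic only."""
            return "1" + threegpp_ts.replace(".", "")

        print(etsi_ts_number("44.008"))   # 144008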

    When I started to work on GSM related topics somewhere between my work at Openmoko and the start of the OpenBSC project, I manually downloaded the PDF files of GSM specifications from the ETSI website. This was a cumbersome process, as you had to enter the spec number (e.g. TS 04.08) in a search window, look for the latest version in the search results, click on that and then click again for accessing the PDF file (rather than a proprietary Microsoft Word file).

    At some point a poor girlfriend of mine was kind enough to do this manual process for each and every 3GPP spec, and then create a corresponding symbolic link so that you could type something like evince /spae/openmoko/gsm-specs/by_chapter/44.008.pdf into your command line and get instant access to the respective spec.

    However, of course, this gets out of date over time, and by now almost a decade has passed without a systematic update of that archive.

    To the rescue, 3GPP started, quite some time ago, to not only provide the obnoxious M$ Word DOC files, but also deep links to ETSI. So you could go to http://www.3gpp.org/DynaReport/44-series.htm and then click on 44.008, and one further click later you had the desired PDF, served by ETSI (3GPP apparently never provided PDF files).

    However, in their infinite wisdom, at some point in 2016 the 3GPP webmaster decided to remove those deep links. Rather than a nice long list of released versions of a given spec, http://www.3gpp.org/DynaReport/44008.htm now points to some crappy JavaScript tabbed page, where you can click on the version number and then get a ZIP file with a single Word DOC file inside. You can hardly make it any more inconvenient and cumbersome. The PDF links would open immediately in modern browsers' built-in JavaScript PDF viewer or your favorite PDF viewer. A single click to the information you want. But no, the PDF links had to go and be replaced with ZIP file downloads that you first need to extract, and then open in something like LibreOffice, taking ages to load the document and rendering it improperly in a word processor. I don't want to edit the spec, I want to read it, sigh.

    So since the usability of this 3GPP specification resource had been artificially crippled, I was sufficiently annoyed to come up with a solution:

    • first create a complete mirror of all ETSI TS (technical specifications) by using a recursive wget on http://www.etsi.org/deliver/etsi_ts/
    • then use a shell script that utilizes pdfgrep and awk to determine the 3GPP specification number (it is written in the title on the first page of the document) and create a symlink. Now I have something like 44.008-4.0.0.pdf -> ts_144008v040000p.pdf (a rough sketch of this idea follows below)
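
    For illustration only – this is not the published osmocom script, just a minimal sketch of the same idea in Python, using pdftotext from poppler-utils instead of pdfgrep, and with made-up directory names:

        import os, re, subprocess, pathlib

        MIRROR = pathlib.Path("etsi_ts")          # assumed location of the wget mirror
        OUT = pathlib.Path("by_chapter")
        OUT.mkdir(exist_ok=True)

        for pdf in MIRROR.rglob("*.pdf"):
            # extract only the first page, where the 3GPP TS number is printed
            text = subprocess.run(["pdftotext", "-f", "1", "-l", "1", str(pdf), "-"],
                                  capture_output=True, text=True).stdout
            m = re.search(r"3GPP TS (\d+\.\d+)", text)
            if m:
                link = OUT / f"{m.group(1)}.pdf"
                if not link.exists():
                    link.symlink_to(os.path.relpath(pdf, OUT))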

    It's such a waste of resources to have to download all those files and then write a script using pdfgrep+awk to re-gain the same usability that the 3GPP chose to remove from their website. Now we can wait for ETSI to disable indexing/recursion on their server, and easy and quick spec access would be gone forever :/

    Why does nobody care about efficiency these days?

    If you're also an avid 3GPP spec reader, I'm publishing the rather trivial scripts used at http://git.osmocom.org/3gpp-etsi-pdf-links

    If you have contacts to the 3GPP webmaster, please try to motivate them to reinstate the direct PDF links.

    by Harald Welte at December 16, 2016 12:00 AM

    December 15, 2016

    ZeptoBARS

    LM1813 - early anti-skid chip : weekend die-shot

    The LM1813 anti-skid chip was the largest analog die National Semiconductor had built to date as of 1974. It was built as a custom part for a brake system vendor supplying Ford Motor Company, for use in their pickup trucks.

    Die size 2234x1826 µm.



    Test chips on the wafer:


    Thanks for the wafers go to Bob Miller, one of the designers of this chip.

    December 15, 2016 11:01 AM

    December 12, 2016

    Free Electrons

    Linux 4.9 released, Free Electrons contributions

    Linus Torvalds has released the 4.9 Linux kernel yesterday, as was expected. With 16214 non-merge commits, this is by far the busiest kernel development cycle ever, but in large part due to the merging of thousands of commits to add support for Greybus. LWN has very well summarized what’s new in this kernel release: 4.9 Merge window part 1, 4.9 Merge window part 2, The end of the 4.9 merge window.

    As usual, we take this opportunity to look at the contributions Free Electrons made to this kernel release. In total, we contributed 116 non-merge commits. Our most significant contributions this time have been:

    • Free Electrons engineer Boris Brezillon, already a maintainer of the Linux kernel NAND subsystem, becomes a co-maintainer of the overall MTD subsystem.
    • Contribution of an input ADC resistor ladder driver, written by Alexandre Belloni. As explained in the commit log: common way of multiplexing buttons on a single input in cheap devices is to use a resistor ladder on an ADC. This driver supports that configuration by polling an ADC channel provided by IIO.
    • On Atmel platforms, improvements to clock handling, bug fix in the Atmel HLCDC display controller driver.
    • On Marvell EBU platforms
      • Addition of clock drivers for the Marvell Armada 3700 (Cortex-A53 based), by Grégory Clement
      • Several bug fixes and improvements to the Marvell CESA driver, for the crypto engine found in most Marvell EBU processors. By Romain Perier and Thomas Petazzoni
      • Support for the PIC interrupt controller, used on the Marvell Armada 7K/8K SoCs, currently used for the PMU (Performance Monitoring Unit). By Thomas Petazzoni.
      • Enabling of Armada 8K devices, with support for the slave CP110 and the first Armada 8040 development board. By Thomas Petazzoni.
    • On Allwinner platforms
      • Addition of GPIO support to the AXP209 driver, which is used to control the PMIC used on most Allwinner designs. Done by Maxime Ripard.
      • Initial support for the Nextthing GR8 SoC. By Mylène Josserand and Maxime Ripard (pinctrl driver and Device Tree)
      • The improved sunxi-ng clock code, introduced in Linux 4.8, is now used for Allwinner A23 and A33. Done by Maxime Ripard.
      • Add support for the Allwinner A33 display controller, by re-using and extending the existing sun4i DRM/KMS driver. Done by Maxime Ripard.
      • Addition of bridge support in the sun4i DRM/KMS driver, as well as the code for a RGB to VGA bridge, used by the C.H.I.P VGA expansion board. By Maxime Ripard.
    • Numerous cleanups and improvements commits in the UBI subsystem, in preparation for merging the support for Multi-Level Cells NAND, from Boris Brezillon.
    • Improvements in the MTD subsystem, by Boris Brezillon:
      • Addition of mtd_pairing_scheme, a mechanism which allows to express the pairing of NAND pages in Multi-Level Cells NANDs.
      • Improvements in the selection of NAND timings.

    In addition, a number of Free Electrons engineers are also maintainers in the Linux kernel, so they review and merge patches from other developers, and send pull requests to other maintainers to get those patches integrated. This lead to the following activity:

    • Maxime Ripard, as the Allwinner co-maintainer, merged 78 patches from other developers.
    • Grégory Clement, as the Marvell EBU co-maintainer, merged 43 patches from other developers.
    • Alexandre Belloni, as the RTC maintainer and Atmel co-maintainer, merged 26 patches from other developers.
    • Boris Brezillon, as the MTD NAND maintainer, merged 24 patches from other developers.

    The complete list of our contributions to this kernel release:

    by Thomas Petazzoni at December 12, 2016 04:06 PM

    Software architecture of Free Electrons’ lab

    As stated in a previous blog post, we officially launched our lab on April 25th, 2016, and it has been contributing to KernelCI since then. In a series of blog posts, we'd like to present in detail how our lab works.

    We previously introduced the lab and its integration in KernelCI, and presented its hardware infrastructure. Now is time to explain how it actually works on the software side.

    Continuous integration in Linux kernel

    Because of Linux’s well-known ability to run on numerous platforms and the obvious impossibility for developers to test changes on all these platforms, continuous integration has a big role to play in Linux kernel development and maintenance.

    More generally, continuous integration is made up of three different steps:

    • building the software which in our case is the Linux kernel,
    • testing the software,
    • reporting the test results.

    KernelCI complete process

    KernelCI checks hourly whether one of the Git repositories it tracks has been updated. If so, it builds, from the last commit, the kernel for ARM, ARM64 and x86 platforms in many configurations. It then stores all these builds in publicly available storage.

    Once the kernel images have been built, KernelCI itself is not in charge of testing it on hardware. Instead, it delegates this work to various labs, maintained by individuals or organizations. In the following section, we will discuss the software architecture needed to create such a lab, and receive testing requests from KernelCI.

    Core software component: LAVA

    At this moment, LAVA is the only software supported by KernelCI, but note that KernelCI offers an API, so if LAVA does not meet your needs, go ahead and make your own!

    What is LAVA?

    LAVA is self-hosted software, organized in a server-dispatcher model, for controlling boards in order to automate boot, bootloader and user-space testing. The server receives jobs specifying what to test, how, and on which boards to run those tests, and transmits those jobs to the dispatcher linked to the specified board. The dispatcher applies all the modifications to the kernel image needed to make it boot on the said board and then fully interacts with it over the serial port.

    Since LAVA has to fully and autonomously control boards, it needs to:

    • interact with the board through serial connection,
    • control the power supply to reset the board in case of a frozen kernel,
    • know the commands needed to boot the kernel from the bootloader,
    • serve files (kernel, DTB, rootfs) to the board.

    The first three requirements are fulfilled by LAVA thanks to per-board configuration files. The last one is handled by the LAVA dispatcher in charge of the board, which downloads the files specified in the job and copies them to a directory accessible by the board through TFTP.

    LAVA organizes the lab in devices and device types. All identical devices are from the same device type and share the same device type configuration file. It contains the set of bootloader instructions to boot the kernel (e.g.: how and where to load files) and the bootloader configuration (e.g.: can it boot zImages or only uImages). A device configuration file stores the commands run by a dispatcher to interact with the device: how to connect to serial, how to power it on and off. LAVA interacts with devices via external tools: it has support for conmux or telnet to communicate via serial and power commands can be executed by custom scripts (pdudaemon for example).

    Control power supply

    Some labs use expensive Switched PDUs to control the power supply of each board but, as discussed in our previous blog post we went for several Devantech ETH008 Ethernet-controlled relay boards instead.

    Linaro, the organization behind LAVA, has also developed software for controlling the power supply of each board, called pdudaemon. We added support for most Devantech relay boards to pdudaemon.

    Connect to serial

    As advised in LAVA's installation guide, we went with telnet and ser2net to connect to the serial ports of our boards. Ser2net basically opens a Linux device and allows interacting with it through a TCP socket on a defined port. A LAVA dispatcher will then launch a telnet client to connect to a board's serial port. Because of the well-known fact that Linux device names might change between reboots, we had to use udev rules in order to guarantee that the serial port we connect to is the one we want to connect to.

    Actual testing

    Now that LAVA knows how to handle devices, it has to run jobs on those devices. LAVA jobs contain which images to boot (kernel, DTB, rootfs), what kind of tests to run when in user space and where to find them. A job is strongly linked to a device type since it contains the kernel and DTB specifically built for this device type.

    Those jobs are submitted to the different labs by the KernelCI project. To do so, KernelCI uses a tool called lava-ci. Amongst other things, this tool contains a big table of the supported platforms, associating the Device Tree name with the corresponding hardware platform name. This way, when a new kernel gets built by KernelCI, and produces a number of Device Tree Blobs (.dtb files), lava-ci knows what are the corresponding hardware platforms to run the kernel on. It submits the jobs to all the labs, which will then only run the tests for which they have the necessary hardware platform. We have contributed a number of patches to lava-ci, adding support for the new platforms we had in our lab.

    LAVA overall architecture

    Reporting test results

    After KernelCI has built the kernel, sent jobs to the contributing labs and LAVA has run the jobs, KernelCI gets the test results from the labs, aggregates them on its website and notifies maintainers of errors via a mailing list.

    Challenges encountered

    As in any project, we stumbled on some difficulties. The biggest problems we had to take care of were board-specific problems.

    Some boards, like the Marvell RD-370, need a rising edge on a pin to boot, meaning we cannot avoid pressing the reset button between boots. To work around this problem, we had to customize the hardware (swap resistors) to bypass this limitation.

    Some other boards lose their serial connection. Some lose it when their power is reset but recover it after a few seconds, a problem we found acceptable to solve by reconnecting to the serial port indefinitely. However, we still have a problem with a few boards which randomly close their serial connection for no reason. After that, we are able to connect to the serial port again but it does not send any characters. The only way to get it to work again is to physically re-plug the serial cable. Unfortunately, we have not yet found a way to solve this bug.

    The Linux kernel of our server refused to bind more than 13 USB devices when it was time to create a second drawer of boards. After some research, we found out the culprit was the xHCI driver. In modern computers, it is possible to disable xHCI support in the BIOS but this option was not present in our server’s BIOS. The solution was to rebuild and install a kernel for the server without the xHCI driver compiled. From that day, the number of USB devices is limited to 127 as in the USB specification.

    Conclusion

    We now have 35 boards in our lab, some of them being the only ones represented in KernelCI. We encourage anyone, hobbyists or companies, to contribute to the effort of bringing continuous integration to the Linux kernel by building your own lab and adding as many boards as you can.

    Interested in becoming a lab? Follow the guide!

    by Quentin Schulz at December 12, 2016 01:05 PM

    December 07, 2016

    Harald Welte

    Open Hardware IEEE 802.15.4 adapter "ATUSB" available again

    Many years ago, in the aftermath of Openmoko shutting down, fellow former Linux kernel hacker Werner Almesberger was working on an IEEE 802.15.4 (WPAN) adapter for the Ben Nanonote.

    As a spin-off to that, the ATUSB device was designed: A general-purpose open hardware (and FOSS firmware + driver) IEEE 802.15.4 adapter that can be plugged into any USB port.

    /images/atusb.jpg

    This adapter has received a mainline Linux kernel driver written by Werner Almesberger and Stefan Schmidt, which was eventually merged into mainline Linux in May 2015 (kernel v4.2 and later).

    Earlier in 2016, Stefan Schmidt (the current ATUSB Linux driver maintainer) approached me about the situation that ATUSB hardware was frequently asked for, but currently unavailable in its physical/manufactured form. As we run a shop with smaller electronics items for the wider Osmocom community at sysmocom, and we also frequently deal with contract manufacturers for low-volume electronics like the SIMtrace device anyway, it was easy to say "yes, we'll do it".

    As a result, ready-built, programmed and tested ATUSB devices are now finally available from the sysmocom webshop

    Note: I was never involved with the development of the ATUSB hardware, firmware or driver software at any point in time. All credits go to Werner, Stefan and other contributors around ATUSB.

    by Harald Welte at December 07, 2016 12:00 AM

    December 06, 2016

    Harald Welte

    The IT security culture, hackers vs. industry consortia

    In a previous life I used to do a lot of IT security work, probably even at a time when most people had no idea what IT security actually is. I grew up with the Chaos Computer Club, as it was a great place to meet people with common interests, skills and ethics. People were hacking (aka 'doing security research') for fun, to grow their skills, to advance society, to point out corporate stupidities and to raise awareness about issues.

    I've always shared any results worth noting with the general public. Whether it was in RFID security, on GSM security, TETRA security, etc.

    Even more so, I always shared the tools, creating free software implementations of systems that – at that time – were very difficult to impossible to access unless you worked for the vendors of the related devices, who obviously had a different agenda than to disclose security concerns to the general public.

    Publishing security related findings at related conferences can be interpreted in two ways:

    On the one hand, presenting at a major event will add to your credibility and reputation. That's a nice byproduct, but it shouldn't be the primary reason, unless you're some kind of egocentric stage addict.

    On the other hand, presenting findings or giving any kind of presentation or lecture at an event is a statement of support for that event. When I submit a presentation at a given event, I think carefully if that topic actually matches the event.

    The reason that I didn't submit any talks in recent years at CCC events is not that I didn't do technically exciting stuff that I could talk about - or that I wouldn't have the reputation that would make people consider my submission in the programme committee. I just thought there was nothing in my work relevant enough to bother the CCC attendees with.

    So when Holger 'zecke' Freyther and I chose to present about our recent journeys into exploring modern cellular modems at the annual Chaos Communications Congress, we did so because the CCC Congress is the right audience for this talk. We did so, because we think the people there are the kind of community of like-minded spirits that we would like to contribute to. Whom we would like to give something back, for the many years of excellent presentations and conversations had.

    So far so good.

    However, in 2016, something happened that I haven't seen yet in my 17 years of speaking at Free Software, Linux, IT Security and other conferences: A select industry group (in this case the GSMA) asking me out of the blue to give them the talk one month in advance at a private industry event.

    I could hardly believe it. How could they? Who am I? Am I spending sleepless nights and non-existent spare time on security research of cellular modems to give a free presentation to corporate guys at a closed industry meeting? The same kind of industry that creates the problems in the first place, and that doesn't get its act together in building secure devices that respect people's privacy? Certainly not. I spend sleepless nights hacking because I want to share the results with my friends. To share them with people who have the same passion, whom I respect and trust. To help my fellow hackers understand technology one step further.

    If that kind of request to undermine the researcher's/author's initial publication among friends is happening to me, I'm quite sure it must be happening to other speakers at the 33C3 or other events, too. And that makes me very sad. I think the initial publication is something that connects the speaker/author with his audience.

    Let's hope the researchers/hackers/speakers have sufficiently strong ethics to refuse such requests. If certain findings are initially published at a certain conference, then that is the initial publication. Period. Sure, you can ask afterwards if an author wants to repeat the presentation (or a similar one) at other events. But pre-empting the initial publication? Certainly not with me.

    I offered the GSMA that I could talk on the importance of having FOSS implementations of cellular protocol stacks as an enabler for security research, but apparently this was not of interest to them. Seems like all they wanted was an exclusive heads-up on work they neither commissioned nor supported in any other way.

    And btw, I don't think what Holger and I will present is all that exciting in the first place. More or less the standard kind of security nightmares. By now we are all so numbed down by nobody considering security and/or privacy in the design of IT systems that it is hardly any news. IoT as it is done so far might very well be the doom of mankind. An unstoppable tsunami of insecure and privacy-invading devices, built on ever more complex technology with way too many security issues. We shall henceforth call IoT the Industry of Thoughtlessness.

    by Harald Welte at December 06, 2016 07:00 AM

    DHL zones and the rest of the world

    I typically prefer to blog about technical topics, but the occasional stupidity in every-day (business) life is simply too hard to resist.

    Today I updated the shipping pricing / zones in the ERP system of my company to predict shipping rates based on weight and destination of the package.

    Deutsche Post, the German postal system, uses their DHL brand for postal packages. They divide the world into four zones:

    • Zone 1 (EU)
    • Zone 2 (Europe outside EU)
    • Zone 3 (World)

    You would assume that "World" encompasses everything that's not part of the other zones. So far so good. However, I then stumbled upon Zone 4 (rest of world). See for yourself:

    /images/dhl-rest_of_world.png

    So the "World" according to DHL is a very small group of countries including Libya and Syria, while countries like Mexico are "rest of world".

    Quite charming, I wonder which PR, communications or marketing guru came up with such a disqualifying name. Maybe they should have called it 3rd world and 4th world instead? Or even Discworld?

    by Harald Welte at December 06, 2016 06:50 AM

    December 03, 2016

    ZeptoBARS

    CD4049 - hex CMOS inverter : weekend die-shot

    On the CD4049 you can see 6 independent inverters, each consisting of 3 inverter stages connected in series with increasing gate width at each stage - this helps to achieve higher speed and lower input capacitance. Gate length is 6µm, so it is probably one of the slowest CMOS circuits one can ever see. Gates are metal (i.e. not self-aligned silicon), which was again the slower type at that time.

    Die size 722x552 µm.


    December 03, 2016 03:27 PM

    Bunnie Studios

    NeTV2 FPGA Reference Design

    A complex system like NeTV2 consists of several layers of design. About a month ago, we pushed out the PCB design. But a PCB design alone does not a product make: there’s an FPGA design, firmware for the on-board MCU, host drivers, host application code, and ultimately layers in the cloud and beyond. We’re slowly working our way from the bottom up, assembling and validating the full system stack. In this post, we’ll talk briefly about the FPGA design.

    This design targets an Artix-7 XC7A50TCSG325-2 FPGA. As such, I opted to use Xilinx’s native Vivado design flow, which is free to download and use, but not open source. One of Vivado’s more interesting features is a hybrid schematic/TCL design flow. The designs themselves are stored as an XML file, and dynamically rendered into a schematic. The schematic itself can then be updated and modified by using either the GUI or TCL commands. This hybrid flow strikes a unique balance between the simplicity and intuitiveness of designing with a schematic, and the power of text-based scripting.


    Above: top-level schematic diagram of the NeTV2 FPGA reference design as rendered by the Vivado tools

    However, the main motivation to use Vivado is not the design entry methodology per se. Rather, it is Vivado’s tight integration with the AXI IP bus standard. Vivado can infer AXI bus widths, address space mappings, and interconnect fabric topology based on the types of blocks that are being strung together. The GUI provides some mechanisms to tune parameters such as performance vs. area, but it’s largely automatic and does the right thing. Being able to mix and match IP blocks with such ease can save months of design effort. However, the main downside of using Vivado’s native IP blocks is they are area-inefficient; for example, the memory-mapped PCI express block includes an area-intensive slave interface which is synthesized, placed, and routed — even if the interface is totally unused. Fortunately many of the IP blocks compile into editable verilog or VHDL, and in the case of the PCI express block the slave interface can be manually excised after block generation, but prior to synthesis, reclaiming the logic area of that unused interface.

    Using Vivado, I’m able to integrate a PCI-express interface, AXI memory crossbar, and DDR3 memory controller with just a few minutes of effort. With similar ease, I’ve added in some internal AXI-mapped GPIO pins to provide memory-mapped I/O within the FPGA, along with a video DMA master which can format data from the DDR3 memory and stream it out as raster-synchronous RGB pixel data. All told, after about fifteen minutes of schematic design effort I’m positioned to focus on coding my application, e.g. the HDMI decode/encode, HDCP encipher, key extraction, and chroma key blender.

    Below is the “hierarchical” view of this NeTV2 FPGA design. About 75% of the resources are devoted to the Vivado IP blocks, and about 25% to the custom NeTV application logic; altogether, the design uses about 72% of the XC7A50T FPGA’s LUT resources. A full-custom implementation of the Vivado IP blocks would save a significant amount of area, as well as be more FOSS-friendly, but it would also take months to implement an equivalent level of functionality.

    Significantly, the FPGA reference design shared here implements only the “basic” NeTV chroma-key based blending functionality, as previously disclosed here. Although we would like to deploy more advanced features such as alpha blending, I’m unable to share any progress because this operation is generally prohibited under Section 1201 of the DMCA. With the help of the EFF, I’m suing the US government for the right to disclose and share these developments with the general public, but until then, my right to express these ideas is chilled by Section 1201.

    by bunnie at December 03, 2016 07:17 AM

    December 02, 2016

    Free Electrons

    Buildroot 2016.11 released, Free Electrons contributions

    The 2016.11 release of Buildroot was published on November 30th. The release announcement, by Buildroot maintainer Peter Korsgaard, gives numerous details about the new features and updates brought by this release. This new release provides support for using multiple BR2_EXTERNAL directories, brings some important updates to the toolchain support, adds default configurations for 9 new hardware platforms, and adds 38 new packages.

    Out of a total of 1423 commits made for this release, Free Electrons contributed 253 commits:

    $ git shortlog -sn --author=free-electrons 2016.08..2016.11
       142  Gustavo Zacarias
       104  Thomas Petazzoni
         7  Romain Perier
    

    Here are the most important contributions we did:

    • Romain Perier contributed a package for the AMD Catalyst proprietary driver. Such drivers are usually not trivial to integrate, so having a ready-to-use package in Buildroot will really make it easier for Buildroot users who use hardware with an AMD/ATI graphics controller. This package provides both the X.org driver and the OpenGL implementation. This work was sponsored by one of Free Electrons' customers.
    • Gustavo Zacarias mainly contributed a large set of patches that do a small update to numerous packages, to make sure the proper environment variables are passed. This is a preparation change to bring top-level parallel build in Buildroot. This work was also sponsored by another Free Electrons customer.
    • Thomas Petazzoni did contributions in various areas:
      • Added a DEVELOPERS file to the tree, to reference which developers are interested in which architectures and packages. Not only does it allow the developers to be Cc'ed when patches are sent on the mailing list (like the get_maintainers script does), but it is also used by the Buildroot autobuilder infrastructure: if a package fails to build, the corresponding developer is notified by e-mail.
      • Misc updates to the toolchain support: switch to gcc 5.x by default, addition of gcc patches needed to fix various issues, etc.
      • Numerous fixes for build issues detected by Buildroot autobuilders

    In addition to contributing 104 commits, Thomas Petazzoni also merged 1095 patches from other developers during this cycle, in order to help Buildroot maintainer Peter Korsgaard.

    Finally, Free Electrons also sponsored the Buildroot project, by funding the meeting location for the previous Buildroot Developers meeting, which took place in October in Berlin, after the Embedded Linux Conference. See the Buildroot sponsors page, and also the report from this meeting. The next Buildroot meeting will take place after the FOSDEM conference in Brussels.

    by Thomas Petazzoni at December 02, 2016 03:14 PM

    Free Electrons at Linux.conf.au, January 2017

    Linux.conf.au, which takes place every year in January in Australia or New Zealand, is a major event of the Linux community. Free Electrons already participated in this event three years ago, and will participate again in this year's edition, which will take place from January 16 to January 20, 2017 in Hobart, Tasmania.

    Linux Conf Australia 2017

    This time, Free Electrons CTO Thomas Petazzoni will give a talk titled A tour of the ARM architecture and its Linux support, in which he will share with LCA attendees what the ARM architecture is, how its Linux support works, what the numerous variants of ARM processors and boards mean, what the Device Tree is, what the ARM-specific bootloaders are, and more.

    Linux.conf.au also features a number of other kernel-related talks, such as the Kernel Report from Jonathan Corbet, or Linux Kernel memory ordering: help arrives at last from Paul E. McKenney. The list of talks is very impressive, and the event also features a number of miniconfs, including one on the Linux kernel.

    If some of our readers located in Australia, New Zealand or neighboring countries plan on attending the conference, do not hesitate to drop us a mail so that we can meet during the event!

    by Thomas Petazzoni at December 02, 2016 08:50 AM

    November 29, 2016

    Bunnie Studios

    Name that Ware, November 2016

    The Ware for November 2016 is shown below.

    Happy holidays!

    by bunnie at November 29, 2016 05:26 PM

    Winner, Name that Ware October 2016

    The Ware for October 2016 is a hard drive read head, from a 3.5″ Toshiba hard drive that I picked out of a trash heap. The drive was missing the cover which bore the model number, but based on the chips used on its logic board, the drive was probably made between 2011-2012. This photo was taken at about 40x magnification. Congrats to Jeff Epler for nailing the ware as the first guesser, email me for your prize!

    by bunnie at November 29, 2016 05:26 PM

    Free Electrons

    Hardware infrastructure of Free Electrons’ lab

    As stated in a previous blog post, we officially launched our lab on April 25th, 2016, and it has been contributing to KernelCI since then. In a series of blog posts, we'd like to present in detail how our lab works, starting with this first post that details the hardware infrastructure of our lab.

    Introduction

    In a lab built for continuous integration, everything has to be fully automated from the serial connections to power supplies and network connections.

    To gather as much information as we could to establish the specifications of the lab, our engineers filled a spreadsheet with all the boards they wanted to have in the lab and their specifics in terms of connectors used for serial port communication and power supply. We reached around 50 boards to put into our lab. Among those boards, we could distinguish two different types:

    • boards which are powered by an ATX power supply,
    • boards which are powered by different power adapters, providing either 5V or 12V.

    Another design criterion was that we wanted to easily allow our engineers to take a board out of the lab or to add one. The easier the process is, the better the lab is.

    Home made cabinet

    Free Electrons' 8 drawers lab

    To meet the size constraints of the Free Electrons office, we had to make the lab fit in a 100cm wide, 75cm deep and 200cm high space. In order to achieve this, we decided to build the lab as a large home made cabinet, with a number of drawers to easily access, change or replace the boards hosted in the lab. As some of our boards provide PCIe connectors, we needed to provide enough height for each drawer, and after doing a few measurements, decided that a 25cm height for our drawers would be fine. With a total height of 200cm, this gives a maximum of 8 drawers.

    In addition, it turns out that most of our boards powered by ATX power supplies are rather large in size, while the ones powered by regular power adapters are usually much smaller. In order to simplify the overall design, we decided that all large boards would be grouped together on a given set of drawers, and all small boards would be grouped together on another set of drawers: i.e we would not mix large and small boards in the same drawer. With the 100cm x 75cm size limitation, this meant a drawer for small boards could host up to 8 boards, while a drawer for large boards could host up to 4 boards. From the spreadsheet containing all the boards supposed to be in the lab, we eventually decided there would be 3 large drawers for up to 12 large boards and 5 small drawers for up to 40 small or medium-sized boards.

    Furthermore, since the lab will host a server and a lot of boards and power supplies, potentially producing a lot of heat, we have to keep the lab as open as it can be while making sure it is strong enough to hold the drawers. We ended up building our own cabinet, made of wood bought from the local hardware store.

    We also want the server to be part of the lab. We already had a small piece of wood strengthening the cabinet between the fourth and sixth drawers, which we could use to fix the server. We decided to give a mini-PC (NUC-like) a try because, after all, it only has to communicate with the serial port of each board and serve files to them. Thus, everything related to the server is fixed and wired behind the lab.

    Make the lab autonomous

    What continuous integration for the Linux kernel typically needs is control of:

    1. the power for each board
    2. serial port connection
    3. a way to send files to test, typically the kernel image and associated files

    In the Free Electrons lab, these different tasks are handled by a dedicated server, itself hosted in the lab.

    Serial port control

    Serial connections are mostly handled via USB on the server side but there are many different connectors on the target side (in our lab, we have 6 different connectors: DE9, microUSB, miniUSB, 2.54mm male pins, 2.54mm female pins and USB-B). Therefore, our server has to have a physical connection with each of the 50 boards present in the lab. The need for USB hubs is then obvious.

    Since we want as few cables connecting the server and the drawers as possible, we decided to have one USB hub per drawer, be it a large drawer or a small drawer. In a small drawer, up to 8 boards can be present, meaning the hub needs at least 8 USB ports. In a large drawer, up to 4 serial connections can be needed so smaller and more common USB hubs can do the work. Since the serial connection may draw some current on the USB port, we wanted all of our USB hubs to be powered with a dedicated power supply.

    All USB hubs are then connected to a main USB hub which in turn is connected to our server.
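
    As an illustration of the kind of access this gives the server, here is a minimal sketch (in Python, using pyserial) of reading one board's console; it is not the actual lab software (covered in a future post), and the device path and baud rate are just example values that depend on how each USB serial adapter is named by udev:

    import serial  # pyserial

    def capture_console(port="/dev/ttyUSB0", baudrate=115200, max_lines=200):
        """Read up to max_lines lines from one board's serial console."""
        lines = []
        with serial.Serial(port, baudrate, timeout=5) as console:
            for _ in range(max_lines):
                raw = console.readline()
                if not raw:  # timeout with no data: the board is idle or done
                    break
                lines.append(raw.decode(errors="replace").rstrip())
        return lines

    if __name__ == "__main__":
        for line in capture_console():
            print(line)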

    Power supply control

    Our server needs to control each board’s power to be able to automatically power on or off a board. It will power on the board when it needs to test a new kernel on it and power it off at the end of the test or when the kernel has frozen or could not boot at all.

    In terms of power supplies, we initially investigated using Ethernet-controlled multi-sockets (also called Switched PDU), such as this device. Unfortunately, these devices are quite expensive, and also often don’t provide the most appropriate connector to plug the cheap 5V/12V power adapters used by most boards.

    So, instead, and following a suggestion from Kevin Hilman (one of KernelCI's founders and maintainers), we decided to use regular ATX power supplies. They have the advantage of being inexpensive, and providing enough power for multiple boards and all their peripherals, potentially including hard drives or other power-hungry peripherals. ATX power supplies also have a pin, called PS_ON#, which when tied to the ground, powers up the ATX power supply. This makes it easy to turn an ATX power supply on or off.

    In conjunction with the ATX power supplies, we have selected an Ethernet-controlled relay board, the Devantech ETH008, which contains 8 relays that can be remote-controlled over the network.

    This gives us the following architecture:

    • For the drawers with large boards powered by ATX directly, we have one ATX power supply per board. The PS_ON# pin from the ATX power supply is cut and rewired to the Ethernet-controlled relay. Thanks to the relay, we control whether PS_ON# is tied to ground or not. If it is tied to ground, the board boots; when it is untied from ground, the board is powered off.
    • For the drawers with small boards, we have a single ATX power supply per drawer. The 12V and 5V rails from the ATX power supply are then dispatched through the 8-relay board, then connected to the appropriate boards, through DC barrel or mini-USB/micro-USB cables, depending on the board. The PS_ON is always tied to the ground, so those ATX power supplies are constantly on.

    In addition, we have added a bit of over-voltage protection, by adding transient-voltage-suppression diodes for each voltage output in each drawer. These diodes are connected in parallel with the circuit to protect: they absorb the excess voltage when it exceeds the maximum allowed value (and may be destroyed in the process).
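
    To give an idea of how the server drives this, here is a rough Python sketch of the power-cycling logic; the send_relay_command() helper, its byte frame, the port number and the host name are purely hypothetical stand-ins (the Devantech ETH008 has its own documented protocol, which is not reproduced here):

    import socket
    import time

    RELAY_PORT = 17494  # placeholder value, not necessarily the real port

    def send_relay_command(host, channel, on):
        """Hypothetical stand-in for the relay board's vendor protocol."""
        with socket.create_connection((host, RELAY_PORT), timeout=2) as sock:
            # Made-up frame [command, relay channel, pulse time]; a real board
            # expects its own documented command bytes instead.
            sock.sendall(bytes([0x20 if on else 0x21, channel, 0]))

    def power_cycle(host, channel, off_time=2.0):
        """Power-cycle one board before loading a new kernel to test.

        For a large board the relay channel toggles PS_ON# of that board's ATX
        supply; for a small board it switches the board's 5V or 12V rail.
        """
        send_relay_command(host, channel, False)  # power off
        time.sleep(off_time)                      # let the board fully power down
        send_relay_command(host, channel, True)   # power back on, board boots

    power_cycle("relay-drawer3.lab", channel=2)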

    Network connectivity

    As part of the continuous integration process, most of our boards will have to fetch the Linux kernel to test (and potentially other related files) over the network through TFTP. So we need all boards to be connected to the server running the continuous integration software.

    Since a single 52 port switch is both fairly expensive, and not very convenient in terms of wiring in our situation, we instead opted for adding 8-port Gigabit switches to each drawer, all of them being connected via a central 16-port Gigabit switch located at the back of the home made cabinet. This central switch not only connects the per-drawer switches, but also the server running the continuous integration software, and the wider Internet.

    In-drawer architecture: large boards

    A drawer designed for large boards, powered by an ATX power supply contains the following components:

    • Up to four boards
    • Four ATX power-supplies, with their PS_ON# connected to an 8-port relay controller. Only 4 of the 8 ports are used on the relay.
    • One 8-port Ethernet-controlled relay board.
    • One 4-port USB hub, connecting to the serial ports of the four boards.
    • One 8-port Ethernet switch, with 4 ports used to connect to the boards, one port used to connect to the relay board, and one port used for the upstream link.
    • One power strip to power the different components.
    Large drawer example scheme

    Large drawer in the lab

    In-drawer architecture: small boards

    A drawer designed for small boards contains the following components:

    • Up to eight boards
    • One ATX power-supply, with its 5V and 12V rails going through the 8-port relay controller. All ports in the relay are used when 8 boards are present.
    • One 8-port Ethernet-controlled relay board.
    • One 10-port USB hub, connecting to the serial ports of the eight boards.
    • Two 8-port Ethernet switches, connecting the 8 boards, the relay board and an upstream link.
    • One power strip to power the different components.
    Small drawer example scheme

    Small drawer in the lab

    Server

    At the back of the home made cabinet, a mini PC runs the continuous integration software, that we will discuss in a future blog post. This mini PC is connected to:

    • A main 16-port Gigabit switch, itself connected to all the Gigabit switches in the different drawers
    • A main USB hub, itself connected to all the USB hubs in the different drawers

    As expected, this allows the server to control the power of the different boards, access their serial port, and provide network connectivity.

    Detailed component list

    If you’re interested in the specific components we’ve used for our lab, here is the complete list, with the relevant links:

    Conclusion

    Hopefully, sharing these details about the hardware architecture of our board farm will help others create a similar automated testing infrastructure. We of course welcome feedback on this hardware architecture!

    Stay tuned for our next blog post about the software architecture of our board farm.

    by Quentin Schulz at November 29, 2016 01:31 PM

    November 28, 2016

    ZeptoBARS

    JCST CJ431 : weekend die-shot

    CJ431 is another implementation of the 431 shunt voltage reference, manufactured by Jiangsu Changjiang Electronics Technology (JCST).
    Die size 620x631 µm.


    November 28, 2016 06:10 AM

    November 27, 2016

    Harald Welte

    Ten years anniversary of Openmoko

    In 2006 I first visited Taiwan. The reason back then was Sean Moss-Pultz contacting me about a new Linux and Free Software based Phone that he wanted to do at FIC in Taiwan. This later became the Neo1973 and the Openmoko project and finally became part of both Free Software as well as smartphone history.

    Ten years later, it might be worth sharing a bit of a retrospective.

    It was about building a smartphone before Android or the iPhone existed or even were announced. It was about doing things "right" from a Free Software point of view, with FOSS requirements going all the way down to component selection of each part of the electrical design.

    Of course it was quite crazy in many ways. First of all, it was a bunch of white, long-nosed western guys in Taiwan, starting a company around Linux and Free Software, at a time where that was not really well-perceived in the embedded and consumer electronics world yet.

    It was also crazy in terms of the many cultural 'impedance mismatches', and I think at some point it might even be worth writing a book about the many stories we experienced. The biggest problem here is of course that I wouldn't want to expose any of the companies or people involved in the many instances where something went wrong. So probably it will remain a secret to those present at the time :/

    In any case, it was a great project and definitely one of the most exciting (albeit busy) times in my professional career so far. It was also great that I could involve many friends and FOSS-compatriots from other projects in Openmoko, such as Holger Freyther, Mickey Lauer, Stefan Schmidt, Daniel Willmann, Joachim Steiger, Werner Almesberger, Milosch Meriac and others. I am happy to still work on a daily basis with some of that group, while others have moved on to other areas.

    I think we all had a lot of fun, learned a lot (not only about Taiwan), and were working really hard to get the hardware and software into shape. However, the constantly growing scope, the [for western terms] quite unclear and constantly changing funding/budget situation and the many changes in direction ultimately led to missing the market opportunity. At the time the iPhone and later Android entered the market, it was too late for a small crazy Taiwanese group of FOSS-enthusiastic hackers to still have a major impact on the landscape of smartphones. We tried our best, but in the end, after a lot of hype and publicity, it never was a commercial success.

    What's sadder to me than the lack of commercial success is the lack of successful free software that resulted. Sure, there were some u-boot and Linux kernel drivers that got merged mainline, but none of the three generations of UI stacks (GTK, Qt or EFL based), nor the GSM modem abstraction gsmd/libgsmd, nor the middleware (freesmartphone.org) has managed to survive the end of the Openmoko company, despite having deserved to survive.

    Probably the most important part that survived Openmoko was the pioneering spirit of building free software based phones. This spirit has inspired pure volunteer based projects like GTA04/Openphoenux/Tinkerphone, who have achieved extraordinary results - but who are in a very small niche.

    What does this mean in practice? We're stuck with a smartphone world in which we can hardly escape any vendor lock-in. It's virtually impossible in the non-free-software iPhone world, and it's difficult in the Android world. In 2016, we have more Linux based smartphones than ever - yet we have less freedom on them than ever before. Why?

    • the amount of hardware documentation on the processors and chipsets today is typically less than 10 years ago. Back then, you could still get the full manual for the S3C2410/S3C2440/S3C6410 SoCs. Today, this is not possible for the application processors of any vendor
    • the tighter integration of application processor and baseband processor means that it is no longer possible on most phone designs to have the 'non-free baseband + free application processor' approach that we had at Openmoko. It might still be possible if you designed your own hardware, but it's impossible with any actually existing hardware in the market.
    • Google blurring the line between FOSS and proprietary code in the Android OS. Yes, there's AOSP - but how many features are lacking? And on how many real-world phones can you install it? Particularly with the Google Nexus line being EOL'd? One of the popular exceptions is probably the Fairphone 2 with its alternative AOSP operating system, even though that's not the default of what they ship.
    • The many binary-only drivers / blobs, from the graphics stack to wifi to the cellular modem drivers. It's a nightmare and really scary if you look at all of that, e.g. at the binary blob downloads for Fairphone2 to get an idea about all the binary-only blobs on a relatively current Qualcomm SoC based design. That's compressed 70 Megabytes, probably as large as all of the software we had on the Openmoko devices back then...

    So yes, the smartphone world is much more restricted, locked-down and proprietary than it was back in the Openmoko days. If we had been more successful then, that world might be quite different today. It was a lost opportunity to make the world embrace more freedom in terms of software and hardware, without single-vendor lock-in and proprietary obstacles everywhere.

    by Harald Welte at November 27, 2016 03:00 PM

    November 24, 2016

    Harald Welte

    Open Hardware Multi-Voltage USB UART board released

    During the past 16 years I have been playing a lot with a variety of embedded devices.

    One of the most important tasks for debugging or analyzing embedded devices is usually to get access to the serial console on the UART of the device. That UART is often exposed at whatever logic level the main CPU/SoC/uC is running at. For 5V and 3.3V that is easy, but for more and more unusual voltages I always had to build a custom cable or a custom level shifter.

    In 2016, I finally couldn't resist any longer and built a multi-voltage USB UART adapter.

    This board exposes two UARTs at a user-selectable voltage of 1.8, 2.3, 2.5, 2.8, 3.0 or 3.3V. It can also use whatever other logic voltage between 1.8 and 3.3V, if it can source a reference of that voltage from the target embedded board.

    /images/mv-uart-front.jpg

    Rather than just building one for myself, I released the design as open hardware under CC-BY-SA license terms. Full schematics + PCB layout design files are available. For more information see http://osmocom.org/projects/mv-uart/wiki

    In case you don't want to build it from scratch, ready-made machine assembled boards are also made available from http://shop.sysmocom.de/products/multi-voltage-usb-dual-uart

    by Harald Welte at November 24, 2016 11:00 PM

    Open Hardware miniPCIe WWAN modem USB breakout board released

    There are plenty of cellular modems on the market in the mPCIe form factor.

    Playing with such modems is reasonably easy: you can simply insert them in a mPCIe slot of a laptop or an embedded device (Soekris, PC Engines or the like).

    However, many of those modems actually export interesting signals like digital PCM audio or UART ports on some of the mPCIe pins, both in standard and in non-standard ways. Those signals are inaccessible in those embedded devices or in your laptop.

    So I built a small break-out board which performs the basic function of exposing the mPCIe USB signals on a USB mini-B socket, providing power supply to the mPCIe modem, offering a SIM card slot at the bottom, and exposing all additional pins of the mPCIe header on a standard 2.54mm pitch header for further experimentation.

    /images/mpcie-breakout-front.jpg

    The design of the board (including schematics and PCB layout design files) is available as open hardware under CC-BY-SA license terms. For more information see http://osmocom.org/projects/mpcie-breakout/wiki

    If you don't want to build your own board, fully assembled and tested boards are available from http://shop.sysmocom.de/products/minipcie-wwan-modem-usb-break-out-board

    by Harald Welte at November 24, 2016 11:00 PM

    November 13, 2016

    ZeptoBARS

    Infineon BCR185W - PNP BJT with bias resistors : weekend die-shot

    Infineon BCR185W is a 0.1A PNP BJT. Bias resistors are 10 kΩ and 47 kΩ according to the datasheet.
    Die size 395x285 µm.


    November 13, 2016 05:19 PM

    November 08, 2016

    Free Electrons

    Slides and videos from the Embedded Linux Conference Europe 2016

    Last month, the entire Free Electrons engineering team attended the Embedded Linux Conference Europe in Berlin. The slides and videos of the talks have been posted, including the ones from the seven talks given by Free Electrons engineers:

    • Alexandre Belloni presented on ASoC: Supporting Audio on an Embedded Board, slides and video.
    • Boris Brezillon presented on Modernizing the NAND framework, the big picture, slides and video.
    • Boris Brezillon, together with Richard Weinberger from sigma star, presented on Running UBI/UBIFS on MLC NAND, slides and video.
    • Grégory Clement presented on Your newer ARM64 SoC Linux check list, slides and video.
    • Thomas Petazzoni presented on Anatomy of cross-compilation toolchains, slides and video.
    • Maxime Ripard presented on Supporting the camera interface on the C.H.I.P, slides and video.
    • Quentin Schulz and Antoine Ténart presented on Building a board farm: continuous integration and remote control, slides and video.

    by Thomas Petazzoni at November 08, 2016 03:43 PM

    November 05, 2016

    ZeptoBARS

    DVD photosensor : weekend die-shot

    This is an unidentified photo-sensor from a DVD-RW drive. Most of the work is done by the middle quad - it can receive the signal, track focus (via astigmatic focusing) and follow the track. The additional quads are probably there to improve tracking; they are not used as full quads - there are fewer outputs for the left and right quads.

    Die size 1839x1635 µm.



    Closer look at photo-diodes:

    November 05, 2016 08:55 PM

    November 04, 2016

    Village Telco

    MP2 AWD – ‘All Wheel Drive’ Edition

    The MP2 AWD "All Wheel Drive" edition is now available for order. The MP2 AWD represents a big step forward for the Mesh Potato. It is based on the same core as the MP2 Phone and is packaged in an outdoor enclosure with additional features and capabilities, most notably a second radio capable of 2T2R (MIMO) operation on the 2.4 and 5GHz bands. It also has an internal USB port as well as an SD card slot. This opens up the possibilities for innovation. The SD slot can host cached content such as World Possible's Rachel Offline project or any locally important content. The USB port is available for a variety of uses such as a 3G/4G modem for backhaul or backup.

    The MP2 AWD is also easier to deploy than previous models as power, data, and telephony have been integrated into a single Ethernet connection thanks to the PoE/TL adaptor that is shipped with the device. Now phone, data, and power are all served via a single cable.

    The default user setup for the MP2 AWD is to use the 2.4GHz radio for local hotspot access and the 5GHz radio to create the backbone network on the mesh but it can be configured to suit a variety of scenarios.

    The MP2 AWD has the following features:

    • Everything already included in MP2 Phone including:
      • Atheros AR9331 SoC with a 2.4GHz 802.11n 1×1 router in a single chip
      • Internal antenna for 2.4GHz operation
      • FXS port based on Silicon Labs Si3217x chipset
      • 16/64MB flash/ram memory configuration
      • Two 100Base-T Ethernet ports
      • High-speed UART for console support
    • A second radio module based on the MediaTek/Ralink RT5572 chipset which supports IEEE 802.11bgn 2T2R (2×2 MIMO) operation on 2.4 and 5 GHz bands.
    • Internal SD card slot capable of supporting local content serving, data caching, and general data storage applications.
    • Internal USB port which can be used for a memory device, GSM 3G/4G dongle or other USB devices.
    • PoE/TL adaptor which will carry Voice/Data/Power via a single Cat5/6 cable to the MP2 AWD. Similar to a passive PoE connector, but it also carries the voice telephone line connection, allowing a phone to be plugged in remotely from the MP2 AWD.

    Available for order now on the Village Telco store.

    by steve at November 04, 2016 07:46 PM

    November 01, 2016

    Bunnie Studios

    NeTV2 Tech Details Live

    Alphamax LLC now has details of the NeTV2 live, including links to preliminary schematics and PCB source files.

    The key features of NeTV2 include:

    • mPCIE v2.0 (5Gbps x1 lane) add-in card format
    • Support for full 1080p60 video
    • Artix-7 FPGA
    • FPGA “hack port” breaking out 3x spare GTP transceiver pairs
    • 512 MB of DDR3-800 @ 32-bit wide memory for frame buffering

    I adopted an add-in card format to allow end users to pick the cost/performance trade-off that suited their application the best. Some users require only a text overlay (NeTV’s original design scenario); but others wanted to blend HD video and 3D graphics, which would require a substantially more powerful and expensive CPU. An add-in card allows users to plug into anything from an economical $60 all-in-one, to a fully loaded gaming machine. The kosagi forum has an open thread for NeTV2 discussion.

    As noted previously, we are currently seeking legal clarity on the suite of planned features for the product, including highly requested features such as alpha blending which require access to the descrambled video stream.

    by bunnie at November 01, 2016 09:35 AM

    October 30, 2016

    Bunnie Studios

    Name that Ware, October 2016

    The Ware for October 2016 is shown below:

    I like this one because not only is it exquisitely engineered, it’s also aesthetically pleasing.

    Sorry for the relative radio silence on the blog — been very heads down the past couple months grinding through several major projects, including my latest book, “The Hardware Hacker”, which is on-track to hit shelves in a couple of months!

    by bunnie at October 30, 2016 04:14 PM

    Winner, Name that Ware September 2016

    The ware for September 2016 is a ColorVision Spyder-series monitor color calibrator.

    Congrats to North-X for naming the ware, email me for your prize!

    by bunnie at October 30, 2016 04:11 PM

    October 29, 2016

    ZeptoBARS

    K140UD2B - Soviet opamp : weekend die-shot

    K140UD2B is an old Soviet opamp without internal frequency compensation, similar to the RCA CA3047T. ICs manufactured in ~1982 have a bare die in the metal can; ones manufactured in 1988 have some protective overcoat inside the metal can (which is quite unusual).
    Die size 1621x1615 µm.


    October 29, 2016 07:46 PM

    October 28, 2016

    Mirko Vogt, nanl.de

    intel 540s SSD fail

    My intel SSD failed. Hard. As in: its content got wiped. But before getting way too theatrical, let’s stick to the facts first.

    I upgraded my Lenovo ThinkPad X1 Carbon with a bigger SSD in the late summer this year — a 1TB intel 540s (M.2).

    The BIOS of ThinkPads (and probably other brands as well) offers to secure your drive with an ATA password. This feature is part of the ATA specification and was already implemented and used back in the old IDE times (remember the X-BOX 1?).

    With such an ATA password set, all read/write commands to the drive will be ignored until the drive gets unlocked. There’s some discussion about whether ATA passwords should or shouldn’t be used — personally I like the idea of $person not being able to just pull out my drive, modify its unencrypted boot record and put it back into my computer without me noticing.

    With regard to current SSDs, the ATA password doesn't just lock access to the drive but also plays a part in the FDE (full disk encryption) featured by modern SSDs — but back to what actually happened…

    As people say, it’s good practice to frequently(TM) change passwords. So I did with my ATA password.

    And then it happened. My data was gone. All of it. I could still access the SSD with the newly set password but it only contained random data. Even the first couple of KB, which were supposed to contain the partition table as well as unencrypted boot code, magically seem to have been replaced with random data. Perfectly random data.

    So, what happened? Back to the FDE of recent SSDs: they perform encryption on data written to the drive (and decryption on reads) — no matter if you want it or not.
    The data is encrypted with a key stored on the device — with no easy way of reading it out (hence no backup). This happens totally transparently; the computer the device is connected to doesn't have to care about that at all.

    And the ATA password is used to encrypt the key the actual data on the drive is encrypted with. Password encrypts key encrypts data.
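
    To illustrate the principle only (this toy Python sketch has nothing to do with intel's actual firmware; the library, key sizes and KDF parameters are arbitrary choices):

    import base64
    import hashlib
    import os
    from cryptography.fernet import Fernet

    def kek_from_password(password: bytes, salt: bytes) -> bytes:
        """Derive a key-encryption key (KEK) from the ATA password."""
        raw = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
        return base64.urlsafe_b64encode(raw)

    salt = os.urandom(16)
    dek = Fernet.generate_key()  # random data-encryption key, lives on the drive

    # "Password encrypts key encrypts data": only the wrapped DEK is stored.
    wrapped = Fernet(kek_from_password(b"old ATA password", salt)).encrypt(dek)

    # A normal password change just re-wraps the same DEK, so data stays readable.
    unwrapped = Fernet(kek_from_password(b"old ATA password", salt)).decrypt(wrapped)
    rewrapped = Fernet(kek_from_password(b"new ATA password", salt)).encrypt(unwrapped)
    assert unwrapped == dek

    # Regenerating the DEK instead leaves every sector encrypted with a key that
    # no longer exists anywhere: the data is effectively gone.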

    Back to my case: No data, just garbage. Perfectly random garbage. First idea on what happened, as obvious as devastating: the data on the drive gets read and decrypted with a different key than it initially got written and encrypted with. If that’s indeed the case, my data is gone.

    This behaviour is actually advertised as a feature. intel calls it "Secure Erase". No need to overwrite your drive dozens of times like in the old days — thereby ensuring the data has irreversibly vanished in the end. No, just wipe the key your data is encrypted with and you're done. And exactly this seems to have happened to me. I am done.

    Fortunately I made backups. Some time ago. Quite some time ago. Of a few directories. Very few. Swearing. Tears. I know, I know, I don’t deserve your sympathies (but I’d still appreciate!).

    Anger! Whose fault is it?! Who to blame?!

    Let’s check the docs on ATA passwords, which appear to be very clear — from the official Lenovo FAQ:

    “Will changing the Master or User hard drive password change the FDE key?”
    – “No. The hard drive passwords have no effect on the encryption key. The passwords can safely be changed without risking loss of data.”

    Not my fault! Yes! Wait, another FAQ entry says:

    “Can the encryption key be changed?”
    – “The encryption key can be regenerated within the BIOS, however, doing so will make all data inaccessible, effectively wiping the drive. To generate a new key, use the option listed under Security -> Disk Encryption HDD in the system BIOS.”

    Double-checking the BIOS to see if I unintentionally told it to change the FDE key: no, I wasn't even able to find such a setting.

    Okay — intermediate result: either a buggy BIOS telling my SSD to (re)generate the encryption key (and therewith "Secure Erase" everything on it), or a buggy SSD controller deciding to alter the key at will.

    Google! Nothing. Frightening reports about the disastrous “8MB”-bug on the earlier series 320 devices popped up. But nothing on series 540s.

    If nothing helps and/or there’s nobody to blame: go on Twitter!

    Some Ping-Pong:

    Then…

    Wait, what?! That’s a known issue? I didn’t find a damn thing in the whole internets! Tell me more!

    And to my surprise – they did. For a minute. Shortly before the respective tweets got deleted.

    Let's take a look at what my phone cached:

    The deleted tweets contain a link http://intel.ly/2eRl73j which resolves to https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00055&languageid=en-fr which is an advisory seemingly describing exactly what happened to me:

    “In systems with the SATA devsleep feature enabled, setting or resetting the user or master password by the ATA security feature set may cause data corruption.”

    Later on:

    “Intel became aware of this issue during early customer validation.”

    I guess I just became aware of being part of the “early customer validation”-program. This issue: Personally validated. Check.

    Ok, short recap:

    • intel has a severe bug causing data loss on 540s SSD and – according to the advisory – other series as well
    • intel knows about it (advisory dates to 1st of August)
    • intel doesn’t seem to be eager to spread the word about it
    • affected intel SSDs are sold with the vulnerable firmware version
    • nobody knows a damn thing about it (recall the series 320 issue which was big)

    Meanwhile, I could try to follow up on @lenovo’s tips:

    Sounds good! Maybe, just maybe, that could bring my data back.

    Let's skip the second link, as it contains a dedicated Windows software I'd love to run, but my Windows installation just got wiped (and I'm not really keen on reinstalling and thereby overwriting my precious maybe-still-not-yet-permanently-lost data).

    The first link points to an ISO file. Works for me! Until it crashes. Reproducibly. This ISO reproducibly crashes my Lenovo X1 Carbon 3rd generation, whether booting from a USB thumb-drive (officially supported, it says) or from CD. Hm.

    For now I seem to have to conclude with the following questions:

    • Why can't I find a damn thing about this bug in the media?
    • Why did intel delete its tweets referencing this bug?
    • Why doesn't the firmware updater do anything besides crashing my computer?
    • Why didn’t I do proper backups?!
    • How do I get my data back?!?1ß11

     

    PS: Before I clicked the Publish button I again set up a few search queries. Found my tweets.

    by mirko at October 28, 2016 12:55 AM

    October 24, 2016

    Elphel

    Using a flash with a CMOS image sensor: ERS and GRR modes

    Operation modes in conventional CMOS image sensors with the electronic rolling shutter

    Flash test setup

    Most CMOS image sensors have an Electronic Rolling Shutter – the images are acquired by scanning line by line. Their strengths and weaknesses are well known, and extremely wide usage has made the technology close to perfect – Andrey might have already said this somewhere before.

    There are CMOS sensors with a Global Shutter BUT (if we take the same optical formats):

    • because of more elements per pixel – they have lower full well capacity and quantum efficiency
    • because analog memory is used – they have higher dark current and higher shutter ratio

    Some links:

    So, the typical sensor with ERS may support 3 modes of operation:

    • Electronic Rolling Shutter (ERS) Continuous
    • Electronic Rolling Shutter (ERS) Snapshot
    • Global Reset Release (GRR) Snapshot

    GRR Snapshot was available in the 10353 cameras but we never tried it ourselves – one had to write directly to the sensor's registers to turn it on. But now it is tested and working in the 10393s, available through the TRIG (0x14) parameter.

    sensor

    MT9P001 sensor

    Further, I will be writing about ON Semi’s MT9P001 image sensor focusing on snapshot modes. The operation modes are described in the sensor’s datasheet. In short:

    In ERS Snapshot mode (Fig.1,3), exposure time is constant across all rows but each next row’s exposure start is delayed by tROW (row readout time) from the previous one (and so is the exposure end).

    In GRR Snapshot mode (Fig.2,4), the exposure of all rows starts at the same moment but each next row is exposed for tROW longer than the previous one. This mode is good when flash use is needed.

    The difference between ERS Snapshot and Continuous is that in the latter mode the sensor doesn't wait for a trigger and starts a new image while still reading out the previous one. It provides the highest frame rate (Fig.5).

    Fig.1 ERS

    Fig.1 Electronic Rolling Shutter (ERS) Snapshot mode

    Fig.2 GRR

    Fig.2 Global Reset Release (GRR) Snapshot mode

    Fig.3 ERS mode, whole frame

    Fig.3 ERS mode, whole frame

    Fig.4 GRR whole frame

    Fig.4 GRR mode, whole frame

    cmos_sensor_modes

    Fig.5 Sensor operation modes, frame sequence

    Here are some of the actual parameters of MT9P001:

    Parameter Value
    Active pixels 2592h x 1944v
    tROW 33.5 μs
    Frame readout time (Nrows x tROW) 1944 x 33.5 μs ~ 65 ms

    Test setup

    • NC393L-389
    • 9xLEDs
    • Fan (Copal F251R, 25×25 mm, rotating at 5500-8000 RPM)

    The LEDs were powered & controlled by the camera’s external trigger output, the delay and duration of which are programmable.

    The flash duration was set to 20 μs to catch the fan's blades (marked with stickers) without motion blur – at 5500-8000 RPM that is 0.5-0.96° of rotation per 20 μs. There was not enough light from the LEDs, so the setup was placed in a dark environment and the camera color gains were set to 8 (ISO ~800-1000) – the images are a bit noisy.

    The trigger period was set to 250 ms – and the synced LEDs were blinking for each frame.

    The information on how to program the NC393 camera to generate trigger signal, fps, change sensor’s operation modes (ERS/GRR) can be found here.

    Fig.6a Setup: screen, camera view

    Fig.6b Setup: fan

    Fig.6c Setup: fan, camera view

    Flash in ERS mode

    Fig.7a Fig.7b Fig.7c

    In Fig.7a, to expose all rows to the flash, the exposure needs to be programmed so that the 1st row's end of exposure extends past the last row's start of exposure, and the flash is delayed until the last row's exposure has started. That makes each row's exposure 72ms+tflash.
    Note: there is no ERS effect for moving objects – provided, of course, that the flash is much brighter than the other light sources, which would otherwise reduce the contrast during the 72ms frame time.

    In Fig.7b the exposure is shorter than the frame readout time – the flash delay can be anything – the result is a brighter band on the image, as shown in the example below.

    Another way to expose all rows is to keep the flash on from the 1st row start until the last row end (Fig.7c) – that’s as good as keeping the flash on all the time.
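
    For a quick sanity check of these numbers, here is a small back-of-the-envelope calculation in Python using the MT9P001 parameters listed above; the constants come from the table, and treating the 72 ms figure as the ~65 ms readout plus blanking overhead is an assumption:

    # ERS vs GRR: minimum exposure needed for every row to see the flash.
    T_ROW = 33.5e-6    # row time, seconds
    N_ROWS = 1944      # active rows
    T_FLASH = 20e-6    # flash duration used in the test setup

    readout = N_ROWS * T_ROW              # time for the exposure window to roll
                                          # from the first row to the last (~65 ms)
    ers_min_exposure = readout + T_FLASH  # ERS: the 1st row must still be
                                          # integrating when the last row starts
    grr_min_exposure = T_FLASH            # GRR: all rows start together, so the
                                          # 1st row only has to cover the flash

    print(f"rolling readout:            {readout * 1e3:.1f} ms")
    print(f"ERS min exposure for flash: {ers_min_exposure * 1e3:.1f} ms")
    print(f"GRR min exposure for flash: {grr_min_exposure * 1e3:.2f} ms")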

    Example:

    Diagram Screen Fan
    Exposure time, ms 5 20
    Flash duration, ms 0.02 (20μs) 0.02
    Flash delay, ms 40 40
    Comments The fan blades are motion blurred in the rows not affected by the delayed 20μs flash. The flash delay is set so the affected rows appear in the middle of the image. Exposure time defines the width of the bright rows band.

    Flash in GRR mode

    Fig.8a GRR, short exposure, short flash
    Fig.8b GRR, flash delayed to readout zone

    In GRR mode the flash does not need to be delayed and the exposure of the 1st row can be as low as tflash, but the last row will be exposed for tflash+72ms (Fig.8a). If the scene is uniformly illuminated, the image tends to be darker at the top and brighter towards the bottom. GRR is very useful with a flash lamp.
    Note: No ERS effect (as in Fig.7a case).

    Fig.8b just shows what happens if the flash is delayed until frame is read out.

    Examples:

    Diagram Screen Fan
    Exposure time, ms 0.1 0.1
    Flash duration, ms 0.02 0.02
    Flash delay, ms 40 30
    Comments The fan blades are motion blurred in the rows not affected by the delayed 20μs flash. All of the rows not read out before the flash are affected.

     

    Diagram Screen Fan
    Exposure time, ms 0.1 0.1
    Flash duration, ms 0.02 0.02
    Flash delay, ms 0 0
    Comments Fan is rotating. No motion blur. In GRR if flash is not delayed the whole image is affected by the flash. Brighter environment = lower contrast.

     

    Diagram Screen Fan
    Fig.9 GRR, long exposure, short flash Fig.22 Fig.19
    Exposure time, ms 5 10
    Flash duration, ms 0.02 0.02
    Flash delay, ms 0 0
    Comments Fan is rotating. 100 times longer exposure compared to the previous example – the environment is relatively dark.

    Conclusions

    • ERS Continuous – max fps, constant exposure, not synced
    • ERS Snapshot – constant exposure, synced
    • GRR Snapshot – synced, use this mode with flash

    Links

    by Oleg Dzhimiev at October 24, 2016 11:56 PM

    October 20, 2016

    ZeptoBARS

    National Semiconductor LM330 - first LDO (1976) : weekend die-shot

    LM330/LM2930 (LM130) is the first LDO linear regulator, manufactured by National Semiconductor since 1976.
    Die size 1723x1490 µm.



    On a different die one can see a funny litho/processing defect over the power transistor (especially at full resolution):


    The wafer also had a few test chips:


    That was on a 3" wafer:


    Thanks for the original wafers to Bob Miller, one of the designers of this chip.

    October 20, 2016 07:44 AM

    October 19, 2016

    Free Electrons

    Support for Device Tree overlays in U-Boot and libfdt

    We have been working for almost two years now on the C.H.I.P platform from Nextthing Co. One of the characteristics of this platform is that it provides an expansion header, which allows connecting expansion boards, also called DIPs in the CHIP community.

    In a manner similar to what is done for the BeagleBone capes, it quickly became clear that we should be using Device Tree overlays to describe the hardware available on those expansion boards. Thanks to the feedback from the Beagleboard community (especially David Anders, Pantelis Antoniou and Matt Porter), we designed a very nice mechanism for run-time detection of the DIPs connected to the platform, based on an EEPROM available in each DIP and connected through the 1-wire bus. This EEPROM allows the system running on the CHIP to detect which DIPs are connected to the system at boot time. Our engineer Antoine Ténart worked on a prototype Linux driver to detect the connected DIPs and load the associated Device Tree overlay. Antoine’s work was even presented at the Embedded Linux Conference, in April 2016: one can see the slides and video of Antoine’s talk.

    However, it turned out that this Linux driver had a few limitations. Because the driver relies on Device Tree overlays stored as files in the root filesystem, such overlays can only be loaded fairly late in the boot process. This wasn't working very well for storage devices, or for DRM, which doesn't allow hotplugging some components. Therefore, this solution wasn't working well for the display-related DIPs provided for the CHIP: the VGA and HDMI DIPs.

    The answer to that was to apply those Device Tree overlays earlier, in the bootloader, so that Linux wouldn't have to deal with them. Since we're using U-Boot on the CHIP, we made a first implementation that we submitted back in April. The review process ran its course, and the changes were eventually merged and appeared in U-Boot 2016.09.

    List of relevant commits in U-Boot:

    However, the U-Boot community also requested that the changes be merged in the upstream libfdt, which is hosted as part of dtc, the device tree compiler.

    Following this suggestion, Free Electrons engineer Maxime Ripard has been working on merging those changes in the upstream libfdt. He sent a number of iterations, which received very good feedback from dtc maintainer David Gibson. And it finally came to a conclusion in early October, when David merged the seventh iteration of those patches in the dtc repository. It should therefore hopefully be part of the next dtc/libfdt release.

    List of relevant commits in the Device Tree compiler:

    Since libfdt is used by a number of other projects (like Barebox, or even Linux itself), all of them will gain the ability to apply Device Tree overlays when they upgrade their version of libfdt. People from the BeagleBone and the Raspberry Pi communities have already expressed interest in using this work, so hopefully, this will turn into something that will be available on all the major ARM platforms.

    by Maxime Ripard at October 19, 2016 08:47 PM

    October 04, 2016

    Free Electrons

    A Kickstarter for a low cost Marvell ARM64 board

    At the beginning of October a Kickstarter campaign was launched to fund the development of a low-cost board based on one of the latest Marvell ARM 64-bit SoCs: the Armada 3700. While costing under $50, the board allows using most of the Armada 3700 features:

    • Gigabit Ethernet
    • SATA
    • USB 3.0
    • miniPCIe

    ESPRESSObin interfaces

    The Kickstarter campaign was started by Globalscale Technologies, who has already produced numerous Marvell boards in the past: the Armada 370 based Mirabox, the Kirkwood based SheevaPlug, DreamPlug and more.

    We pushed the initial support of this SoC to the mainline Linux kernel 6 months ago, and it landed in Linux 4.6. There are still a number of hardware features that are not yet supported in the mainline kernel, but we are actively working on it. As an example, support for the PCIe controller was merged in Linux 4.8, released last Sunday. According to the Kickstarter page the first boards would be delivered in January 2017 and by this time we hope to have managed to push more support for this SoC to the mainline Linux kernel.

    We have been working on the mainline support of Marvell SoCs for 4 years, and we are glad to at last see the first board under $50 using this SoC. We hope it will help expand the open source community around this SoC family and will bring more contributions to the Marvell EBU SoCs.

    by Gregory Clement at October 04, 2016 09:36 AM

    October 03, 2016

    Free Electrons

    Linux 4.8 released, Free Electrons contributions

    Linux 4.8 was released on Sunday by Linus Torvalds, with numerous new features and improvements that have been described in detail on LWN: part 1, part 2 and part 3. KernelNewbies also has an updated page on the 4.8 release. We contributed a total of 153 patches to this release. LWN also published some statistics about this development cycle.

    Our most significant contributions:

    • Boris Brezillon improved the Rockchip PWM driver to avoid glitches, basing that work on his previous improvements to the PWM subsystem already merged in the kernel. He also fixed a few issues and shortcomings in the PWM regulator driver. This finishes his work on the Rockchip-based Chromebook platforms, where a PWM is used for a regulator.
    • While working on the driver for the sii902x HDMI transceiver, Boris Brezillon did a cleanup of many DRM drivers. Those drivers were open coding the encoder selection. This is now done in the core DRM subsystem.
    • On the support of Atmel platforms
      • Alexandre Belloni cleaned up the existing board device trees, removing unused clock definitions and starting to remove warnings when compiling with the Device Tree Compiler (dtc).
    • On the support of Allwinner platforms
      • Maxime Ripard contributed a brand new infrastructure, named sunxi-ng, to manage the clocks of the Allwinner platforms, fixing shortcomings of the Device Tree representation used by the existing implementation. He moved the support of the Allwinner H3 clocks to this new infrastructure.
      • Maxime also developed a driver for the Allwinner A10 Digital Audio controller, bringing audio support to this platform.
      • Boris Brezillon improved the Allwinner NAND controller driver to support DMA assisted operations, which brings a very nice speed-up to throughput on platforms using NAND flashes as the storage, which is the case of Nextthing’s C.H.I.P.
      • Quentin Schulz added support for the Allwinner R16 EVB (Parrot) board.
    • On the support of Marvell platforms
      • Grégory Clément added multiple clock definitions for the Armada 37xx series of SoCs.
      • He also corrected a few issues with the I/O coherency on some Marvell SoCs
      • Romain Perier worked on the Marvell CESA cryptography driver, bringing significant performance improvements, especially for dmcrypt usage. This driver is used on numerous Marvell platforms: Orion, Kirkwood, Armada 370, XP, 375 and 38x.
      • Thomas Petazzoni submitted a driver for the Aardvark PCI host controller present in the Armada 3700, enabling PCI support for this platform.
      • Thomas also added a driver for the new XOR engine found in the Armada 7K and Armada 8K families.

    Here are, in detail, the different contributions we made to this release:

    by Alexandre Belloni at October 03, 2016 12:12 PM

    Elphel

    Elphel presenting at ORCONF 2016, An open source digital design conference

    On October 8th, 2016, Andrey will present his work on VDT – Free Software Environment for FPGA Development – at ORCONF 2016, an open source digital design conference.

    The conference will take place in Bologna, Italy, and we are glad for the possibility to meet some of the European users of Elphel cameras, and to connect with the community of developers excited about open source design, free software and open hardware.

    Elphel will be represented at the conference by Andrey Filippov from the USA headquarters and Alexandre Poltorak, founder of the Swiss 3D4Pi mobile mapping company, who works closely with Elphel to integrate Eyesis4Pi, a stereophotogrammetric camera, for image-based 3D reconstruction applications. Andrey will bring and demonstrate the new multisensor NC393 H-camera, and Alexandre plans to take some panoramic footage with the Eyesis4Pi camera while in Bologna.

    by olga at October 03, 2016 05:12 AM

    September 30, 2016

    Free Electrons

    Free Electrons at the X.org Developer Conference 2016

    The X.org Foundation hosts the X.org Developer Conference every year around September. Despite its name, it is not limited to X.org developers, but gathers all the Linux graphics stack developers, including X.org, Mesa, Wayland, and other graphics stacks like ChromeOS, Android or Tizen.

    This year’s edition was held last week at the Haaga-Helia University in Helsinki. At Free Electrons, we have been doing more and more work on the graphics stack recently through the work we do on Atmel and NextThing Co’s C.H.I.P., so it made sense to attend.

    XDC 2016 conference

    There were a lot of very interesting talks during those three days, as can be seen in the conference schedule, but we especially liked a few of them:

    DRM HWComposer – Slides, Video

    The opening talk was made by two Google engineers from the ChromeOS team, Sean Paul and Zach Reizner. They talked about the work they did on the drm_hwcomposer they wrote for the Pixel C, on Android.

    The hwcomposer is one of the HALs in Android that interfaces between SurfaceFlinger, the display manager, and the underlying display driver. It aims at providing hardware composition features, so that Android can leverage the capabilities of the display engine to perform composition (through planes and sprites), without having to use the CPU or the GPU to do this work.

    The drm_hwcomposer started out as yet another hwcomposer library implementation for the tegra-drm driver in Linux. While implementing it, they turned it into a generic implementation that should be useful for all the DRM drivers out there, and they even introduced some particularly nice features, such as splitting the final screen content into several planes based on the actual displayed content rather than on windows, as is usually done.

    Their work also helped point out a few flaws in the hwcomposer API, which will eventually be fixed in a new revision of that API.

    ARC++ – Slides, Video

    The next talk was once again from a ChromeOS engineer, David Reveman, who came to show his work on ARC++, the component in ChromeOS that allows running Android applications. He was obviously mostly talking about the display side.

    In order to achieve that, he had to implement a hwcomposer that just acts as a proxy between SurfaceFlinger and the Wayland compositor used on the ChromeOS side. The GL rendering is still direct, though: each Android application talks directly to the GPU, as usual. Only the composition is forwarded to the ChromeOS side.

    In order to minimize that composition process, whenever possible ARC++ tries to back each application with an overlay so that the composition happens directly in hardware.

    This also led to some interesting challenges, especially since some of the assumptions of the two systems contradict each other. For example, any application can be resized in ChromeOS, while that is not really a thing in Android, where all applications run full screen.

    HDR Displays in Linux – Slides, Video

    The next talk we found interesting was Andy Ritger from nVidia explaining how HDR displays are supposed to be handled in Linux.

    He first started by explaining what HDR is exactly. While HDR is strictly about having a wider range of luminance than on a regular display, you often also get a wider gamut with HDR-capable displays. This means that on those screens you can display a wider range of colors, with better range and precision in their intensity. And while applications have been able to generate HDR content for more than 10 years, the rest of the display stack wasn’t really ready, meaning that you had to convert the HDR colors to colors that your monitor was able to display, using a technique called tone mapping.

    He then explained that the standard, non-HDR colorspace, sRGB, is not a linear colorspace. This means that doubling the encoded luminance of a color will not give you a color twice as bright on your display. It was designed this way because the human eye is much more sensitive to the various shades of colors when they are dark than when they are bright, which essentially means that the darker a color is, the more precision you want.

    However, the luminance “resolution” on an HDR display is good enough that you don’t actually need that anymore, and you can use a linear colorspace, which in this case is scRGB.

    But blindly drawing in scRGB in all your applications is obviously not a good solution either. You have to make sure that your screen supports it (which is exposed through its EDID), but also actually tell your screen to switch to it (through the infoframes). And that requires some support in the kernel drivers.

    The Anatomy of a Vulkan Driver – Slides, Video

    This talk by Jason Ekstrand was something of a war story of Intel’s bring-up of a Vulkan implementation on their GPUs.

    He first noted that it was actually not that long a project, especially considering that they wrote it from scratch: it took roughly 3 full-time engineers 8 months to come up with a fully compliant and open source stack.

    He then explained why Vulkan was needed. While OpenGL did amazingly well at coping with hardware evolution, it was still designed over 20 years ago, and some of its core characteristics are no longer relevant and are holding application developers back. For example, he mentioned that at its core, OpenGL is based on a singleton-based state machine, which obviously doesn’t scale well anymore on our SMP systems. He also mentioned that it is too abstracted: people just want a lower-level API, or want to render things off screen without X or any context.

    This was fixed in Vulkan by effectively removing the state machine, which allows it to scale, and by pushing things like error checking and synchronization directly to the applications, making the implementation much simpler and less layered, which also simplifies development and debugging.

    He then went on to discuss how code could still be shared between the two implementations: implementing OpenGL on top of Vulkan (which was discarded), having some kind of lighter intermediate language in Mesa to replace Gallium, or simply sharing the common bits through a library and making both the OpenGL and Vulkan libraries use it.

    Motivating preemptive GPU scheduling for real-time systems – Slides, Video

    The last talk that we want to mention is the talk on preemptive scheduling by Roy Spliet, from the University of Cambridge.

    More and more industries, especially the automotive industry, offload some computations to the GPU, for example to implement computer vision. This is then used in a car to implement autonomous driving, making the car recognize signs or stay in its lane. Obviously, this kind of computation is supposed to run in a real-time system, since you probably don’t want your shiny heating user interface to make your car crash into the car in front of it because its rendering was taking too long.

    He first explained what real time means and what the usual metrics are, which should come as no surprise to people used to “CPU-based” real-time systems: latency, deadline, execution time, and so on.

    He then showed a bunch of benchmarks he used to test his preemptive scheduler, with a workload that was basically running OpenArena while running some computations, on various nouveau-based platforms (both desktop-grade GPUs and embedded SoCs).

    This led to some expected conclusions, like the fact that a preemptive scheduler does add some overhead but is on average worth it, while other findings were quite interesting. For example, he observed some rare (0.3%) worst-case latencies that were actually interference from the display engine filling up its empty FIFOs and creating contention on the memory bus.

    Conclusion

    Overall, this has been a great experience. The organisation was flawless, and the one-track-only format makes it easy to meet both the speakers and the attendees. The content was also highly technical, as you might expect, which made us learn a lot and led us to think about some interesting developments we could do on our various projects in the future, such as NextThing Co’s CHIP.

    by Maxime Ripard at September 30, 2016 08:44 AM

    Altus Metrum

    Second Retirement

    At the end of August 2012, I announced my Early Retirement from HP. Two years later, my friend and former boss Martin Fink successfully recruited me to return to what later became Hewlett Packard Enterprise, as an HPE Fellow working on open source strategy in his Office of the CTO.

    I'm proud of what I was able to accomplish in the 25 months since then, but recent efforts to "simplify" HPE actually made things complicated for me. Between the announcement in late June that Martin intended to retire himself, and the two major spin-merger announcements involving Enterprise Services and Software... well...

    The bottom line is that today, 30 September 2016, is my last day at HPE.

    My plan is to "return to retirement" and work on some fun projects with my wife now that we are "empty nesters". I do intend to remain involved in the Free Software and open hardware worlds, but whether that might eventually involve further employment is something I'm going to try and avoid thinking about for a while...

    There is a rocket launch scheduled nearby this weekend, after all!

    by bdale's rocket blog at September 30, 2016 04:23 AM

    September 26, 2016

    Village Telco

    SECN 4.0 Firmware Available

    The fourth release of the Small Enterprise / Campus Network (SECN) firmware for MP02, Ubiquiti and TP-Link devices, designed to provide combined telephony and data network solutions, is now available for download.

    The major features of this update are:

    • Updated the OpenWrt base to the Chaos Calmer release
    • Updated the stable batman-adv mesh software to version 2016.1
    • Added a factory restore function triggered by the hardware reset button

    Unless you are running a network with some of the first generation Mesh Potatoes, you should consider upgrading to this firmware. The new factory reset function is particularly handy in that any device can be reset to its factory firmware settings by holding down the reset button for 15 seconds.

    Stable firmware is available here:

    MP02 –  http://download.villagetelco.org/firmware/secn/stable/mp-02/SECN_4/
    TP-Link – http://download.villagetelco.org/firmware/secn/stable/tp-link/SECN_4/
    Ubiquiti – http://download.villagetelco.org/firmware/secn/stable/ubnt/SECN_4/

    Please subscribe to the Village Telco community development list if you have questions or suggestions.

    by steve at September 26, 2016 04:25 PM

    September 25, 2016

    Bunnie Studios

    Name that Ware, September 2016

    The Ware for September 2016 is shown below.

    Thanks to J. Peterson for sharing this ware!

    by bunnie at September 25, 2016 09:45 AM

    Winner, Name that Ware August 2016

    After reading through the extensive comments on August’s ware, I’m not convinced anyone has conclusively identified the ware. I did crack a grin at atomicthumbs’ suggestion that this was a “mainboard from a Mrs. Butterworth’s Syrup of Things sensor platform”, but I think I’ll give the prize (please email me to claim it) once again to Christian Vogel for his thoughtful analysis of the circuitry, and possibly correct guess that this might be an old school laser barcode scanner.

    The ware is difficult to evaluate due to the lack of a key component — whatever it is that mounts into the pin sockets and interacts with the coil or transformer near the hole in the center of the circuit board. My feeling is the placement of that magnetic device is not accidental.

    A little bit of poking around revealed this short Youtube video which purports to demonstrate an old-school laser barcode mechanism. Significantly, it has a coil of similar shape and orientation to that of this ware, as well as three trimpots, although that could be a coincidence. Either way, thanks everyone for the entertaining and thoughtful comments!

    by bunnie at September 25, 2016 09:45 AM

    September 19, 2016

    Elphel

    NC393 development progress and the future plans

    Since we started to deliver the first NC393 series cameras in May, we have been working on the camera software – the original version was rather limited. While it was capable of serving images/video over the network and recording them on the internal m.2 SSD, it did not have the advanced image acquisition control (through the GUI and programmatically) that was standard for the earlier NC353 series. Now the core functionality is operational, and in a month we plan to have the remaining parts (inter-camera synchronization, working with multiple sensors per port with the 10359 multiplexer, GPS+IMU logging) online too. The FPGA code is already ported, but it still needs to be tested, and a fair amount of troubleshooting, identifying problems and weeding out bugs is left to be done.

    Fig 1. Four camvc instances for the four channels of NC393 camera

    Users of earlier Elphel cameras can easily recognize the familiar camvc web interface – Fig. 1 shows a screenshot of four instances of this interface controlling the 4 sensors of an NC393 camera in the “H” configuration.

    This web application exercises multiple underlying pieces of software in the camera: the FPGA code, the Linux kernel drivers that control the low level of the camera operation and handle 8 interrupts from the imaging subsystem (the NC353 camera processor had just one), the PHP extension that interacts with the drivers, the image server, the histogram visualization program, the autoexposure and white balance daemons, as well as multiple PHP scripts and Javascript code. Luckily, the higher the level, the fewer changes we needed compared to the NC353 code (in most cases just a single new parameter – the sensor port – had to be introduced), but the debugging process involved going through all the levels of code: bug chasing could start from Javascript code, go to PHP code, then to the PHP extension, to the kernel driver, to direct FPGA control from Python code (bypassing the drivers), to simulating Verilog code with Cocotb. Then, when the problem was identified and the HDL code corrected (it usually required several more iterations with simulation), the top-level programs were tested again with the new FPGA bitstream. This is when the integration of all the development in the same Eclipse IDE really pays off – easy code navigation and making changes to programs in different languages, while the software is rebuilt and the results transferred to the target system automatically.

    Camera core software

    The NC393 camera software aims at the same goals as the previous models – allow full speed operation of the imagers while minimizing the real-time requirements on the software at two levels:

    • kernel level (tolerate large delays when waiting for the interrupts to be served) and
    • application level – allow even scripting languages to keep up with the hardware

    Interrupt latency is usually not a problem when working with full-frame multi-megapixel images, but the camera can also operate a small window at high FPS. Many operations with the sensor (like changing resolution or image size) require coordinated updating of the sensor’s internal registers (usually over the I²C connection), the parameters of the sensor-to-memory FPGA channel (with appropriate latency), the parameters of the memory-to-compressor channel, and the parameters of the compressor itself. Additionally, the camera software should provide modified image headers (reflecting the new window size) when the acquired image is recorded or requested over the network.

    The application software just needs to tell at what frame number it needs the new window size, and the kernel plus FPGA code will take care of the rest. Slow software just needs to ask far enough in advance so that the camera code and the sensor itself have enough time to execute the request. Multiple parameter modifications designated for a specific frame will be applied almost simultaneously, even if frame sync pulses were received from the sensor while the application was sending the new data.

    Image-derived data remains available long after the image is acquired

    Similar things happen with the data received from the sensor – the image itself and the histograms (used for automatic exposure adjustment and white balancing). The application does not need to read them before the next frame data arrives – compressed images are kept in a large (64MB per port) ring buffer in system memory, which can hold a record of several seconds of images. Histograms (for up to 4 different windows inside the full image for each sensor port) are preserved for 15 frames after being acquired and transferred over DMA to system memory. A subset of the essential acquisition parameters and the image metadata (needed for Exif output) are preserved for 2048 and 511 frames respectively.

    Fig 2. Interaction of the image sensor, FPGA, kernel drivers and user space applications

    FPGA frame-based command sequencers

    There are 2 sequencers for each of the four sensor ports at the FPGA level – they do not use any CPU resources:

    • I²C sequencers handle the relatively slow I²C commands to be sent to the sensor; usually these commands need to arrive before the start of the next frame,
    • Command sequencers perform writes to the memory-mapped registers and so control the FPGA operation. These operations need to happen in guaranteed time just after the start of frame, before the corresponding subsystems begin to process the incoming image data.

    Both are synchronized by the “start of frame” signals from the sensor; each sequencer has 16 frame pages, and each page contains 64 command slots.

    Sequencers allow absolute (modulo 16) and relative (to the current) frame addresses. Writing to the current frame (zero offset) is interpreted as “ASAP” and the commands are issued immediately, not synchronized to the start of frame. Additionally, if the commands were written too late and the frame sync arrived before they were executed, they will still be processed before the next frame’s slot page is activated.
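
    To make the addressing scheme concrete, here is a minimal sketch (in C, not the actual FPGA or driver code) of how a command’s target frame page could be resolved under the modulo-16 scheme described above; the function and constant names are made up for illustration.

    #define SEQ_FRAME_PAGES 16  /* 16 frame pages per sequencer, 64 command slots each */

    /* Resolve the frame page a command belongs to. A relative offset of 0
     * means "ASAP": the command is issued immediately, not synchronized to
     * the next start-of-frame. */
    static unsigned resolve_frame_page(unsigned current_frame, unsigned addr,
                                       int relative, int *asap)
    {
        *asap = relative && (addr == 0);
        if (relative)
            return (current_frame + addr) % SEQ_FRAME_PAGES;
        return addr % SEQ_FRAME_PAGES;  /* absolute addresses are already modulo 16 */
    }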

    Kernel support of the image frame parameters

    There are many frame-related parameters that control image acquisition in the camera, including various sensor register settings, parameters that control gamma conversion, the image format for recording to video memory (DDR3 dedicated to the FPGA, not shared with the CPU), the compressor format, signal gains, color saturations, compression quality, coring parameters, and histogram window size and position. There is no such thing as “the current frame parameters” in the camera: at any given moment the sensor may be programmed for a certain image size, while its output data reflects the previous frame format, and the compressor is still not finished with an even earlier image. That means the camera should be aware of multiple sets of the same parameters, each applicable to a certain frame (identified by an absolute frame number). In that case the sensor “now” is receiving not the “current” frame parameters, but the parameters of a frame that will happen 2 frame intervals later.

    The current implementation keeps the parameters (all of them unsigned long) in a 16-element ring buffer, each element being a

    /** Parameters block, maintained for each frame (0..15 in NC393) of each sensor channel */
    struct framepars_t {
            unsigned long pars[927];      ///< parameter values (indexed by P_* constants)
            unsigned long functions;      ///< each bit specifies function to be executed (triggered by some parameters change)
            unsigned long modsince[31];   ///< parameters modified after this frame - each bit corresponds to one element in par[960] (bit 31 is not used)
            unsigned long modsince32;     ///< parameters modified after this frame, super index - non-zero elements in mod[31] (bit 31 is not used)
            unsigned long mod[31];        ///< modified parameters - each bit corresponds to one element in par[960] (bit 31 is not used)
            unsigned long mod32;          ///< super index - non-zero elements in mod[31] (bit 31 is not used)
    };

    Interrupt-driven processing of the parameters takes CPU time (in contrast with the FPGA sequencers described above), so the processing should be efficient and not iterate through almost a thousand entries on each interrupt. It is also not practical to copy a full set of parameters from the previous frame. The parameter structure for each frame includes a mod[31] array, where each element stores a bit field describing modification of 32 consecutive parameters, and a single mod32 that represents each mod element as a single bit. So mod32 == 0 means that there were no changes (as is true for the majority of frames) and there is nothing for the interrupt service routine to do. The additional fields modsince[31] and modsince32 indicate that there were changes to the parameters after this frame; they are used to initialize a new (15 frames ahead of “now”) frame entry in the ring buffer. The buffer is modulo 16, so parameters for [this_frame + 15] share the same memory address as [this_frame - 1], and if a parameter was not “modified since” (as is true for the majority of parameters), nothing has to be done for it when advancing this_frame.
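
    The fast path described above can be illustrated with a short, hypothetical sketch (not the actual driver code) that walks the mod32/mod[] bit fields of the framepars_t structure shown earlier:

    /* Hypothetical sketch: the interrupt handler only walks the mod[] bit
     * arrays when mod32 signals that at least one parameter was modified. */
    static void process_modified_pars(struct framepars_t *fp)
    {
        if (!fp->mod32)
            return;                          /* nothing changed - the majority of frames */
        for (int i = 0; i < 31; i++) {
            if (!(fp->mod32 & (1UL << i)))
                continue;                    /* this group of 32 parameters is untouched */
            for (int b = 0; b < 32; b++) {
                if (fp->mod[i] & (1UL << b)) {
                    int index = i * 32 + b;  /* index into fp->pars[] */
                    /* ... act on the modified parameter fp->pars[index] ... */
                }
            }
        }
    }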

    There is a configurable parameter that tells the interrupt-time parameter processing how far to look ahead into the future (Fig. 2 shows frames that are too far in the future hatched). The function starts with the current frame and proceeds into the future (up to the specified limit), looking for modified but not yet processed parameters. Processing of the modified parameters involves calling up to 32 “generic” (sensor-agnostic) functions and up to 32 of their sensor-specific variants. Each parameter that triggers some action when modified is assigned a bitmask of functions to schedule on change, and when the parameter is written to the buffer, the functions field for the frame is OR-ed with that mask, so during the interrupt only this single field has to be considered.

    Processing the parameters in a frame scans all the bits in functions (in a defined order, starting from the LSB, generic first); the functions involve verification and calculation of derivative values, and writing data to the FPGA command and I²C sequencers (deep green and blue in Fig. 2 show the newly added commands to the sequencers). Additionally, some actions may schedule other parameter changes to be processed at a later frame.
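
    As a rough illustration of this dispatch (again a sketch, not the driver’s real tables or function names), scanning the functions bit field LSB-first and calling the generic handler before its sensor-specific variant could look like this:

    typedef int (*par_func_t)(int sensor_port, int frame);

    /* Hypothetical handler tables: up to 32 generic functions and up to 32
     * sensor-specific variants, indexed by the bit position in 'functions'. */
    extern par_func_t generic_funcs[32];
    extern par_func_t sensor_funcs[32];

    static void run_scheduled_functions(unsigned long functions,
                                        int sensor_port, int frame)
    {
        for (int bit = 0; bit < 32; bit++) {
            if (!(functions & (1UL << bit)))
                continue;
            if (generic_funcs[bit])
                generic_funcs[bit](sensor_port, frame);  /* generic (sensor-agnostic) first */
            if (sensor_funcs[bit])
                sensor_funcs[bit](sensor_port, frame);   /* then the sensor-specific variant */
        }
    }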

    User space applications and the frame parameters

    Applications see the frame parameters through a character device driver that supports write, mmap, and (overloaded) lseek.

    • the write operation allows setting a list of parameters and applying these changes to a particular frame as a single transaction
    • mmap provides read access to all the frame parameters for up to 15 frames in the future; parameter defines are provided through the header files under the kernel’s include/uapi, so applications (such as the PHP extension) can access them by symbolic names.
    • lseek is heavily overloaded, especially for positive offsets to SEEK_END – such commands initiate special actions in this driver, such as waiting for a specific frame. It is partially used instead of the ioctl command, because lseek is immediately supported in most languages while ioctl often requires special extensions. A rough usage sketch follows this list.
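
    The following is a purely illustrative user-space sketch of this access pattern; the device node name and the LSEEK_* offset are hypothetical placeholders, not the driver’s real constants.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define LSEEK_WAIT_NEXT_FRAME 1          /* hypothetical positive offset to SEEK_END */

    int wait_and_read_pars(void)
    {
        int fd = open("/dev/framepars0", O_RDWR);        /* hypothetical device node */
        if (fd < 0)
            return -1;

        /* Read-only view of the parameter ring buffer (up to 15 frames ahead). */
        void *pars = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (pars == MAP_FAILED) {
            close(fd);
            return -1;
        }

        /* Overloaded lseek: a positive offset to SEEK_END asks the driver to
         * block until the corresponding event, e.g. the next frame. */
        lseek(fd, LSEEK_WAIT_NEXT_FRAME, SEEK_END);

        /* A write() would submit a list of parameter changes to be applied
         * atomically at a designated frame (format omitted here). */

        munmap(pars, 4096);
        close(fd);
        return 0;
    }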

    Communicating image data to the user space

    Similar to the handling of the frame acquisition and processing parameters, which deals with the future and lets even slow applications control the process in a frame-accurate way, other kernel drivers use the FPGA code features to give applications sufficient time to process the acquired data before it is overwritten by newer data. These drivers use a similar character device interface, with mmap for data access and lseek for control; some use write to send data to the driver.

    • The circbuf driver provides access to the compressed image data in any of the four 64MB ring buffers that contain the data compressed by the FPGA (the FPGA also provides a microsecond-accurate timestamp and the image size). Each image is 32-byte aligned, and the FPGA skips an additional 32 bytes after each frame; the compressor interrupt service routine (located in sensor_common.c) fills this area with some of the image acquisition metadata.
    • The histograms driver handles the histograms for the acquired images. Histograms are calculated in the FPGA on the image-to-memory path and so are active even if the compressor is stopped. There are 3 types of histogram data that may be needed by applications, and only the first one (direct) is provided by the FPGA over DMA; the two others (derivative) are calculated in the driver and cached, so an application request for the same derivative histogram does not require re-calculation. Histograms are calculated for the pixels after gamma conversion even if raw (2 bytes/pixel) data is recorded, so table indices are always in the range of 0 to 255.
      • direct histograms are provided by the FPGA, which maintains data for the 16 last acquired frames, for each of the 4 color channels (2 separate green ones), for each of the sensor ports and sub-channels (when multiplexers are used). Each frame’s data contains 256*4=1024 unsigned long (32 bit) values.
      • cumulative histograms contain the corresponding cumulative values: each element equals the sum of the direct histogram values from 0 to the specified index. When divided by the value at index 255 (the total number of pixels of this color channel, 1/4 of all pixels in the WOI), the result tells what fraction of all pixels have values less than or equal to the current one.
      • percentiles are reversed cumulative histograms: they tell the pixel level at or below which a given fraction of all pixels falls. These values refer to non-linear (gamma-converted) pixel values, so automatic exposure also uses reversed gamma tables and interpolates between two values in the percentile table. A small sketch of the cumulative and percentile calculations follows this list.
    • The jpeghead driver generates JPEG/JP4 headers that need to be concatenated with the compressed output from circbuf (and with the end-of-image 0xff/0xd9 marker) to make a complete image file.
    • The exif driver manipulates Exif data in the camera – it stores frame-variable Exif data for the last acquired frames in a 512-element ring buffer, allows specifying and setting additional Exif fields, and provides mmap read access to the metadata.
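
    For clarity, here is a small sketch of the two derivative histogram calculations described above, assuming a 256-bin direct histogram for one color channel; it only illustrates the arithmetic, not the driver’s actual code.

    #include <stdint.h>

    #define HIST_BINS 256

    /* Cumulative histogram: element i is the sum of direct[0..i];
     * cumul[255] ends up holding the total pixel count of this color channel. */
    static void cumulative_hist(const uint32_t direct[HIST_BINS], uint32_t cumul[HIST_BINS])
    {
        uint32_t sum = 0;
        for (int i = 0; i < HIST_BINS; i++) {
            sum += direct[i];
            cumul[i] = sum;
        }
    }

    /* Percentile (reversed cumulative histogram): the lowest gamma-converted pixel
     * level at or below which at least 'fraction' of all pixels fall. 'fraction'
     * is expressed in 1/65536 units to stay in integer arithmetic. */
    static int percentile(const uint32_t cumul[HIST_BINS], uint32_t fraction)
    {
        uint64_t threshold = ((uint64_t)cumul[HIST_BINS - 1] * fraction) >> 16;
        for (int level = 0; level < HIST_BINS; level++) {
            if (cumul[level] >= threshold)
                return level;
        }
        return HIST_BINS - 1;
    }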

    Camera applications

    Current applications include

    • The Elphel PHP extension allows multiple PHP scripts to work in the camera, providing the server side of the web application functionality, such as camvc.
    • imgsrv is a fast image server that bypasses the camera web server and transfers images and metadata without copying any extra data – the network controller sends data over DMA from the same buffer where the FPGA delivered the compressed data (also over DMA). Each sensor port has a corresponding instance of imgsrv, serving a different network port.
    • camogm allows simultaneous recording of image data from multiple channels at up to 220 MB/s.
    • autoexposure is an auto exposure and white balance daemon that uses the image histograms for the specified WOI to adjust the exposure time, the sensor analog gains and the signal gain coefficients in the FPGA.
    • pnghist is a CGI program that visualizes histograms as PNG images; it supports several histogram presentation modes.

    Other applications that were available in the earlier NC353 series cameras (such as RTP/RTSP video streamer) will be ported shortly.

    Future plans

    The NC393 camera has 12 times higher performance than the earlier NC353 series, and porting the functionality of the NC353 is much more than just tweaking the FPGA code and the drivers – large portions had to be redesigned completely. The camera FPGA project includes provisions for advanced image processing, and that changed the foundation of the camera code. That said, it is much more exciting to move forward and implement functionality that did not exist before, but we had to finish that “boring” part first. And now that it is coming closer, I would like to share our future development plans and invite others who may be interested to cooperate.

    New sensors

    The NC393 was designed for maximal flexibility in the sensor interface – this is something we learned from our experience with the 303-313-333-353 series of cameras. So far the NC393 has been tested with one parallel interface sensor and one with a 4-lane HiSPI interface (both have links to the circuit diagrams). Each port can use 8 lanes + clock (9 differential pairs) and several more control/clock signals. Larger/faster sensors may use multiple sensor ports and so multiply the available interface lines.
    It will be interesting to try high sensitivity, large pixel E2V sensors and ToF technology. The TI OPT8241 seems to be a good fit for the NC393, but the OPT8241 I²C register map is not provided.

    Quadcopters flying Star Wars style

    Most quadcopters use brushless DC motors (BLDC) that may be tricky to control. Integrated motor controllers that detect rotor position using the voltage on the power coils or external sensors (and so emulate the ancient physical brushes) work fine when you apply only moderate variations to the rotation speed, but may fail if you need to change the output fast and in a precisely calculated manner. An FPGA can handle such calculations better and leave the CPU resources for high-level tasks. I would imagine such a motor controller to include some tiny FPGA paired with high-current MOSFET drivers attached to the motors, then use lightweight SATA cables (such as the 3M 5602 series) to connect them to the NC393 daughter board. The NC393 already has a dual ARM CPU, so it can use existing free software to fly drones and take video/images at the same time. Making it not just fly, but do “tricks” will be really exciting.

    Image processing and High Level Synthesis (HLS) alternative

    The NC393 FPGA design started around a 16-channel memory access optimized for 2D data. Shared memory may not be the most modern approach to parallel processing, but when the bulk memory (0.5GB of DDR3) is a single device, it has to be shared between the channels, and not all module connections can be converted to simple stream protocols. Even before we started to add image processing, we had to maintain two separate bitstreams – one for the parallel sensors and the other for the HiSPI (serial) ones. They cannot be made run-time programmable, as even the voltage levels are different, to say nothing of the fact that both interfaces together will not fit into the Zynq FPGA – we are already balancing around 80% slice utilization. Theoretically the NC393 can use two parallel and two serial sensors (the two pairs of sensor ports use two separate I/O banks with individually programmable supply voltage), but that adds even more variants to the top-level module configuration and matching constraints files, and makes the code less readable.

    Things will get even more complicated when more active memory channels are involved in the processing, especially when the inter-synchronization of the different modules processing multi-sensor 2D data is more complex than just stream in/stream out.

    When processing multi-view scenes we will start with de-warping followed by FFT to implement correlation between the 4 simultaneous images and so significantly reduce the ambiguity of a stereo-pair correlation. In parallel with working on the Verilog code for the new modules, I plan to try to reduce the complexity of the inter-module connections, making them more flexible and easier to maintain. I would love to use something higher level, but unfortunately there is nothing for me to embrace and use.

    Why I do not believe in HLS

    Focusing on the algorithmic level and leaving the RTL implementation to software is definitely a good idea, but the task is much more ambitious than trying to replace GCC or the GNU/Linux operating system that even the most proprietary and encryption-loving companies have to use. The gap between algorithms and RTL code is wider than the one between C code and assembler for the CPU, regardless of some nice demos with a Sobel filter applied to a live video stream or similar simple processing.

    One of the major handicaps of the existing approach is an obsession with making modern reprogrammable FPGA code mimic the fixed-function hardware integrated circuits popular in the last century. To be software-like is much more powerful than to look like some old hardware. Certainly, separation of application levels and use of standard APIs are important, but they are most beneficial in mature areas. In new ones I consider it the beauty of coding to be able to freely cross any implementation levels, break some good programming practices, adjust things here and there, redesign and start over, balancing overall performance and structure to create something new. Feature and interface freezes will come later.

    So what to use instead?

    I do not yet know exactly what it should be, but I would borrow Python decorators and the functionality of Verilog generate operators. Instead of just instantiating “black boxes” with rigid interfaces, allow the wrapper code (both automatically generated and hand-crafted) to get inside the instantiated modules’ code and modify it for the particular instances – “decoration” meaning generation of modified module code for specific instances. Something like programmatic parametrization (modifying the code, not just the parameter values, even those that direct generate operators).

    Elphel FPGA code is source-code based; there are zero “black boxes” in the design. And as all the code (109579 lines of it) is available, it is accessible to software too, and “robots” can analyze it and make it easier to manage. We would like to have them as “helpers”, not as “wizards” who can offer just a few choices among pre-programmed options.

    To some extent we already have such “helpers” – our current Python code “understands” Verilog parameter definitions in the source code, including some calculations of the derivative ones. That makes it possible for the Python programs running in the camera to use the same register addresses and bit fields as defined for the FPGA code implemented in the current bitstream.

    When the cameras became capable of running FPGA code controlled by a Python program and we were ready to develop kernel drivers, we added extra functionality to the existing Python code. Now it is able not just to read Verilog parameters for itself, but also to generate C code to facilitate driver development. This converter is not a compiler-like program that takes Verilog input and generates C header files. It is still a human-coded program that retrieves the parameter values from the Verilog code and helps the developer by using the familiar content-assist functionality of the IDE, detects and flags misspelled parameter names in PyDev (the Eclipse IDE plugin for Python), and re-generates its output when the Verilog source is modified.

    We also used Python to generate Verilog code for the AHCI implementation; it seemed more convenient than native Verilog generate – wrapping Verilog in Python and generating clean (for human analysis) Verilog code that can be used in a wave viewer and in the implementation tools’ timing analysis. It would be quite natural to make the Python programs understand more of the Verilog code and help us manage the structure, and to generate the matching constraints files that FPGA implementation tools require in addition to the HDL code. FPGA professionals probably use TCL scripts for that; it may be a nice language, but I never used it outside of FPGA scripting, so it is always a problem for me to recall how to use it when coming back to FPGA coding after long interruptions.

    I did look at MyHDL, of course, but it is not exactly what I need. MyHDL tries to replace Verilog completely, and the structural modeling part of it suffers from the focus on RTL. I just want Python to help me with Verilog code, not to replace it (similar to how I do not think that Verilog is the best language to simulate CPU activities). I love Cocotb more – even its gentle name (COroutine based COsimulation) tells me that it is not “instead of” but “in addition to”. Cocotb does not have a ready solution for this project either (it was never a goal of this program), so here is an interesting project to implement.

    There are several specific cases that I would like to be handled by the implementation.

    • add new functionally horizontal connections between hierarchical objects in a clean way: add outputs all the way up to the common parent module, wires at the top, and then inputs all the way down to the destination. Of course it is usually better to avoid such extra connections, but their traces in module ports help to keep them under control. Such connections may be just temporary and later removed, or be the start of adding new functionality to the involved modules.
    • generate a low footprint debug network to selected hierarchical modules and generate target Python code to probe/modify registers through this network accessing data by the HDL hierarchical names.
    • control the destiny of the decorators – either keep them as separate pieces of code or merge them with the original source and make the resulting HDL code a new “co-designed” source.

    And this is what I plan to start with (in parallel to adding new Verilog code): try to combine existing pieces of the solution and make it a complete one.

    by Andrey Filippov at September 19, 2016 07:41 PM

    September 13, 2016

    Elphel

    Reaching 220 MB/s sustained write speed with SATA-2 controller

    Introduction

    Elphel cameras use camogm, a user space application, for recording acquired images to disk storage. The application is designed to use storage devices such as disk drives or USB drives mounted in the operating system. The Elphel393 model cameras have a SATA-2 controller implemented in the FPGA, a system driver for this controller, and they can be equipped with an SSD drive. We were interested in performing write speed tests using the SATA controller and a couple of M.2 SSDs to find out the top disk bandwidth camogm can use during image recording. Our initial approach was to try the commonly accepted method of using the hdparm and dd system utilities. The first disk was a SanDisk SD8SMAT128G1122. According to the manufacturer specification [pdf], this is a low power disk for embedded applications, and it can reach 182 MB/s sequential write speed in SATA-3 mode. We had the following:

    ~# hdparm -t /dev/sda2
    /dev/sda2:
    Timing buffered disk reads: 274 MB in  3.02 seconds =  90.70 MB/sec
    
    ~# time sh -c "dd if=/dev/zero of=/dev/sda2 bs=500M count=1 && sync"
    1+0 records in
    1+0 records out
    
    real	0m6.096s
    user	0m0.000s
    sys	0m5.860s

    which results in a total write speed of around 82 MB/s (500 MB written in roughly 6.1 s).

    The second disk was a Crucial CT250MX200SSD6 [pdf]; its sequential write speed should be 500 MB/s in SATA-3 mode. We had the following:

    ~# hdparm -t /dev/sda2
    /dev/sda2:
    Timing buffered disk reads: 236 MB in  3.01 seconds =  78.32 MB/sec
    
    ~# time sh -c "dd if=/dev/zero of=/dev/sda2 bs=500M count=1 && sync"
    1+0 records in
    1+0 records out
    
    real	0m6.376s
    user	0m0.010s
    sys	0m5.040s

    which results in a total write speed of around 78 MB/s (500 MB in roughly 6.4 s). Our preliminary tests had shown that the controller can achieve 200 MB/s write speed. Taking this into consideration, the performance figures obtained were not very promising, so we decided to add one new feature to the latest version of camogm – the ability to write data to a raw storage device. A raw storage device is a disk or a disk partition accessed directly, bypassing any operating system caches and buffers. This type of access can potentially improve I/O performance but requires additional effort to implement data management in software.

    First approach

    We tried to bypass the file system in the first attempt and used a device file (/dev/sda in our case) in camogm for I/O operations. We compared the CPU load and I/O wait time during write operations to a partition with an ext4 file system and to a device file. dstat turned out to be a very handy tool for generating system resource statistics. The statistics were collected during 3 periods of operation: in idle mode before writing, during writing, and in idle mode after writing. All these periods can be clearly seen on the figures below. We also changed the quality parameter, which affects the resulting size of the JPEG files: files with the quality parameter set to 80 were around 1 MB in size and files with the quality parameter set to 90 were almost 2 MB in size.

    [Figures sys-q80, sys-q90: CPU load during the test, quality 80 and 90]

    As expected, the figures show that a device file write operation takes less CPU time than the same operation through the file system, because no file system operations or caches are involved.

    [Figures wai-q80, wai-q90: CPU wait for disk I/O, quality 80 and 90]

    “CPU wait for disk IO” on the figures means the percentage of time the CPU waits for an I/O operation to complete. Here the camogm process spends more CPU time waiting for data to be written during device file operations than during file system operations, and again this can be explained by the fact that caching at the file system level is not used.

    We also measured the time camogm spent writing each individual file to the device file and to files on the ext4 file system.

    [Figures write-q80, write-q90: per-file write times, quality 80 and 90]

    The clear patterns on the figures correspond to the several sensor channels used during recording; each channel produced JPEG files of a different size from the other channels. As we have already seen, file system caching has an influence on the results, and the difference in overall write time becomes less obvious as the size of the files increases.

    Although the tests showed that writing data to the file system and to the device file had different overall performance, we could not achieve any significant performance gain that would narrow the gap between the initial results and the preliminary write speed data. We decided to try another approach: only pass commands to the disk driver and write data from the disk driver itself.

    Second approach

    The idea behind this approach was simple. We already have the JPEG data in a circular buffer in memory, and the disk driver only needs pointers to the data we want to write at any given moment in time. camogm was modified to pass those pointers and some meta information to the driver via its sysfs interface. We modified our AHCI driver as well to add the new functions. The driver accepts a command from camogm, aligns the data buffers to a predefined boundary and the frame as a whole to a physical sector boundary, and places the command in a command queue. Commands are picked from the queue right after the current disk transaction is complete. We measured the time the driver spent preparing a new command, waiting for an interrupt after a command had been issued, and waiting for a new command to arrive. The total data size per transaction was around 9.5 MB in the case of the SD8SMAT128G1122 and around 3 MB in the case of the CT250MX200SSD6; the disks were installed in cameras with 14 Mpx and 5 Mpx sensors respectively.
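
    The sector alignment mentioned above is simple arithmetic; a minimal sketch follows, assuming a 512-byte physical sector (an assumption for the example, not a statement about the actual SSDs).

    #define PHYS_SECTOR 512u

    /* Round a frame's total byte count up to the next physical sector boundary
     * before the write command is queued; the remainder becomes padding. */
    static unsigned long align_to_sector(unsigned long nbytes)
    {
        return (nbytes + PHYS_SECTOR - 1) & ~(unsigned long)(PHYS_SECTOR - 1);
    }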

    [Figures write-sd, write-ct: driver timing for the SD8SMAT128G1122 and CT250MX200SSD6]

    These figures show that the time spent in the driver on command preparation is almost negligible in comparison to the time spent waiting for the write command to complete, and this was exactly what we wanted. We could achieve almost 160 MB/s write speed for the SD8SMAT128G1122 and around 220 MB/s for the CT250MX200SSD6. Here is a summary of the results obtained in the different write modes for the two test disks:

    Disk write performance

    Disk              File system access   Device file access   Raw driver access
    SD8SMAT128G1122   82 MB/s              90 MB/s              160 MB/s
    CT250MX200SSD6    78 MB/s              n/a                  220 MB/s

    The CT250MX200SSD6 was not tested in device file access mode, as it was already clear that this method did not fit our needs.

    Disk access sharing

    One of the problems we had to solve while working on the driver was sharing disk access between the operating system and the driver during recording. The disk in the camera had two partitions: one was formatted with an ext4 file system and mounted in the operating system, and the other was used as a data buffer for camogm. It is possible that some user space application accesses the mounted partition while camogm is writing data to the disk data buffer, and this situation should be handled correctly. camogm, as a top priority process, should always have the full disk bandwidth, and other system processes should be granted access only during the periods when camogm is waiting for the next frame. libata has a built-in command deferral mechanism, and we used it in the driver to decide whether a system command should have access to the disk or should be deferred. To use this mechanism, we added our function to the ATA port operations structure:

    static struct ata_port_operations ahci_elphel_ops = {
        ...
        .qc_defer       = elphel_qc_defer,
    };

    This function is called every time a new system command arrives, and the driver can defer the command if it is busy writing data.
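
    For illustration, a minimal sketch of such a callback (not Elphel’s actual implementation) could look like the following; elphel_port_busy() is a hypothetical helper standing in for the driver’s own state tracking, and the usual <linux/libata.h> definitions are assumed.

    static int elphel_qc_defer(struct ata_queued_cmd *qc)
    {
        /* Defer commands coming from the block layer while the driver is busy
         * with a camogm write transaction; libata will retry them later. */
        if (elphel_port_busy(qc->ap))
            return ATA_DEFER_LINK;
        return 0;    /* otherwise let libata issue the command now */
    }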

    by Mikhail Karpenko at September 13, 2016 10:51 PM

    Free Electrons

    Yocto project and OpenEmbedded training updated to Krogoth


    Continuing our efforts to keep our training materials up to date, we have just refreshed our Yocto Project and OpenEmbedded training course for the latest Yocto Project release, Krogoth (2.1.1). In addition to adapting our training labs to the Krogoth release, we improved our training materials to cover more aspects and new features.

    The most important changes are:

    • New chapter about devtool, the new utility from the Yocto project to improve the developers’ workflow to integrate a package into the build system or to make patches to existing packages.
    • Improve the distro layers slides to add configuration samples and give advice on how to use these layers.
    • Add a part about quilt to easily patch already supported packages.
    • Explain in depth how file inclusions are handled by BitBake.
    • Improve the description about tasks by adding slides on how to write them in Python.

    The updated training materials are available on our training page: agenda (PDF), slides (PDF) and labs (PDF).

    Join our Yocto specialist Alexandre Belloni for the first public session of this improved training in Lyon (France) on October 19-21. We are also available to deliver this training worldwide at your site, contact us!

    by Antoine Ténart at September 13, 2016 12:24 PM

    September 12, 2016

    Free Electrons

    Free Electrons at the Kernel Recipes conference

    The 2016 edition of the Kernel Recipes conference will take place from September 28th to 30th in Paris. With talks from kernel developers Jonathan Corbet, Greg Kroah-Hartman, Daniel Vetter, Laurent Pinchart, Tejun Heo, Steven Rostedt, Kevin Hilman, Hans Verkuil and many others, the schedule definitely looks very appealing, and indeed the event is now full.

    Thomas Petazzoni, Free Electrons CTO, will be attending this event. If you’re interested in discussing business or career opportunities with Free Electrons, this event will be a great place to meet together.

    by Thomas Petazzoni at September 12, 2016 12:04 PM

    September 09, 2016

    Elphel

    A web interface for a simpler and more flexible Linux kernel dynamic debug controlling

    Along with the documentation, there are a number of articles explaining the dynamic debug (dyndbg) feature of the Linux kernel, like this one or this one. However, we haven’t found anything that extends the basic functionality, so we created a web interface on top of dyndbg using JavaScript and PHP.


    Fig.1 debugfs-webgui

    In most cases it all works fine – when writing a Linux driver you:
    1. insert pr_debug()/dev_dbg() calls for debug messaging,
    2. compile the kernel with dyndbg enabled (CONFIG_DYNAMIC_DEBUG=y),
    3. then just 'echo' query strings or 'cat' files with commands to switch the debug messages on/off at runtime. Examples:

    • single:

    echo -n 'file svcsock.c line 1603 +pfmt' > <debugfs>/dynamic_debug/control

    • batch file:

    cat query-batch-file > <debugfs>/dynamic_debug/control

    When everything is small, enabling/disabling a whole file or a function is not a problem. When the driver grows big, with lots of debug messages, or a few drivers interact with each other, it becomes more convenient to have multiple configurations with certain debug lines on or off. As the source code changes, the lines get shifted, so the batch files require editing.

    If the target system (embedded or not) has a network connection and a web server (Apache2 + PHP), a quite simple solution is to add a web interface to the dynamic debug. The one we have developed has the following features:

    • allows having multiple configurations for each file
    • displays only files of interest
    • updates debug configuration for modified files where debug lines got shifted
    • keeps/updates the current config (in json format) in tmpfs – saves to disk on button click
    • p, f, l, m, t flags are supported

    Get the source code then proceed with the README.md.

    by Oleg Dzhimiev at September 09, 2016 12:40 AM

    September 01, 2016

    Free Electrons

    Free Electrons at the X Developer Conference

    The next X.org Developer Conference will take place from September 21 to 23 in Helsinki, Finland. This is a major event for Linux developers working in the graphics/display areas, not only at the X.org level, but also at the kernel level, in Mesa, and in other related projects.

    Free Electrons engineer Maxime Ripard will be attending this conference, with 80+ other engineers from Intel, Google, NVidia, Texas Instruments, AMD, RedHat, etc.

    Maxime is the author of the DRM/KMS driver in the upstream Linux kernel for the Allwinner SoCs, which provides display support for numerous Allwinner platforms, especially Nextthing’s CHIP (with parallel LCD support, HDMI support, VGA support and composite video support). Maxime has also worked on making 3D acceleration work on this platform with a mainline kernel, by adapting the Mali kernel driver. Most recently, Maxime has been involved in Video4Linux development, writing a driver for the camera interface of the Allwinner SoCs, and supervising Florent Revest’s work on the Allwinner VPU that we published a few days ago.

    by Thomas Petazzoni at September 01, 2016 02:58 PM

    August 31, 2016

    Free Electrons

    Free Electrons mentioned in Linux Foundation’s report

    Last week, the Linux Foundation announced the publication of the 2016 edition of its usual report “Linux Kernel Development – How Fast It is Going, Who is Doing It, What They are Doing, and Who is Sponsoring It”.

    This report gives a nice overview of the evolution of the Linux kernel since 3.18, especially from a contribution point of view: the rate of changes, who is contributing, whether new developers are joining, etc.

    Free Electrons is mentioned in several places in this report. First of all, even though Free Electrons is a consulting company, it is shown individually rather than as part of the general “consultants” category. As the report explains:

    The category “consultants” represents developers who contribute to the kernel as a work-for-hire effort from different companies. Some consultant companies, such as Free Electrons and Pengutronix, are shown individually as their contributions are a significant number.

    Thanks to being mentioned separately from the “consultants” category, the report also shows that:

    • Free Electrons is the #15 contributing company over the 3.19 to 4.7 development period, by number of commits. Free Electrons contributed a total of 1453 commits, corresponding to 1.3% of the total commits.
    • Free Electrons is ranked #13 in the list of companies by number of Signed-off-by tags from developers who are not the authors of the patches. This happens because 6 of our engineers are maintainers or co-maintainers of various areas in the kernel: they merge patches from contributors, sign off on them, and send them to another maintainer (either the arm-soc maintainers or directly Linus Torvalds, depending on the subsystem).

    We’re glad to see Free Electrons mentioned in this report, which shows that we are a strong contributor to the official Linux kernel. Thanks to this contribution effort, we have tremendous experience with adding support for new hardware in the kernel, so contact us if you want your hardware supported in the official Linux kernel!

    by Thomas Petazzoni at August 31, 2016 09:08 AM

    August 30, 2016

    Free Electrons

    Support for the Allwinner VPU in the mainline Linux kernel

    Over the last few years, and most recently with the support for the C.H.I.P platform, Free Electrons has been heavily involved in initiating and improving the support in the mainline Linux kernel for the Allwinner ARM processors. As of today, a large number of hardware features of the Allwinner processors, especially the older ones such as the A10 or the A13 used in the CHIP, are usable with the mainline Linux kernel, including complex functionality such as display support and 3D acceleration. However, one feature that was still lacking was proper support for the Video Processing Unit (VPU), which allows hardware-accelerated decoding and encoding of popular video formats.

    During the past two months, Florent Revest, a 19-year-old intern at Free Electrons, worked on a mainline solution for this Video Processing Unit. His work followed the reverse engineering effort of the Cedrus project, and this topic was also listed as a High Priority Reverse Engineering Project by the FSF.

    The internship resulted in a new sunxi-cedrus driver, a Video4Linux memory-to-memory decoder kernel driver, and a corresponding VA-API backend, which allows numerous userspace applications to use the decoding capabilities. Both projects have been published on Github:

    Currently, the combination of the kernel driver and VA-API backend supports MPEG2 and MPEG4 decoding only. There is for the moment no support for encoding, and no support for H264, though we believe support for both aspects can be added within the architecture of the existing driver and VA-API backend.

    A first RFC patchset of the kernel driver has been sent to the linux-media mailing list, and a complete documentation providing installation information and architecture details has been written on the linux-sunxi’s wiki.

    Here is a video of VLC playing an MPEG2 demo video on top of this stack on Next Thing’s C.H.I.P:

    by Thomas Petazzoni at August 30, 2016 02:13 PM

    August 18, 2016

    Bunnie Studios

    Name that Ware August 2016

    The Ware for August 2016 is shown below.

    Thanks to Adrian Tschira (notafile) for sharing this well-photographed ware! The make and model of this ware is unknown to both of us, so if an unequivocal identification isn’t made over the coming month, I’ll be searching the comments for either the most thoughtful or the most entertaining analysis of the ware.

    by bunnie at August 18, 2016 04:48 PM

    Winner, Name that Ware July 2016

    The Ware for July 2016 was a board from a Connection Machine CM-2 variant; quite likely a CM-200.

    It’s an absolutely gorgeous board, and the sort of thing I’d use as a desktop background if I used a desktop background that wasn’t all black. Thanks again to Mark Jessop for contributing the ware. Finally, the prize this month goes to ojn for a fine bit of sleuthing – please email me to claim your prize! I particularly loved this little comment in the analysis:

    The board layout technique is different from what I’ve been able to spot from IBM, SGI, DEC. Cray used different backplanes so the connectors at the top also don’t match.

    Every designer and design methodology leaves a unique fingerprint on the final product. While I can’t recognize human faces very well, I do perceive stylistic differences in a circuit board. The brain works in funny ways…

    by bunnie at August 18, 2016 04:48 PM

    August 16, 2016

    Harald Welte

    (East) European motorbike tour on 20y old BMW F650ST

    For many years I've wanted to do some motorbike riding across the Alps, but somehow never managed to do so. It seems that when in Germany I've always been too busy - contrary to the many motorbike tours around and across Taiwan which I did during my frequent holidays there.

    This year I finally took the opportunity to combine visiting some friends in Hungary and Bavaria with a nice tour starting from Berlin over Prague and Brno (CZ), Bratislava (SK) to Tata and Budapest (HU), further along lake Balaton (HU) towards Maribor (SI) and finally across the Grossglockner High Alpine Road (AT) to Salzburg and Bavaria before heading back to Berlin.

    /images/f650st-grossglockner-hochalpenstrasse.jpg

    It was eight fun (but sometimes long) days of riding. By some strange turn of luck, not a single drop of rain was encountered during all that time, traveling across six countries.

    The most interesting parts of the tour were:

    • Along the Elbe river from Pirna (DE) to Lovosice (CZ). Beautiful scenery along the river valley, most parts of the road immediately on either side of the river. Quite touristy on the German side, much more pleasant and quiet on the Czech side.
    • From Mosonmagyarovar via Gyor to Tata (all HU). Very little traffic alongside road '1'. Beautiful scenery with lots of agriculture and forests left and right.
    • The Northern coast of Lake Balaton, particularly from Tihany to Keszthely (HU). Way too many tourists and traffic for my taste, but still very impressive to realize how large/long that lake really is.
    • From Maribor to Dravograd (SI) alongside the Drau/Drav river valley.
    • Finally, of course, the Grossglockner High Alpine Road, which reminded me in many ways of the high mountain tours I did in Taiwan. Not a big surprise, given that both lead you up to about 2500 meters above sea level.

    Finally, I have to say I've been very happy with the performance of my 1996 model BMW F 650ST bike, which has coincidentally just celebrated its 20th anniversary. I know it's an odd bike design (650cc single-cylinder with two spark plugs, ignition coils and two carburetors) but consider it an acquired taste ;)

    I've also published a map with a track log of the trip.

    In one month from now, I should be reporting from motorbike tours in Taiwan on the equally trusted small Yamaha TW-225 - which of course plays in a totally different league ;)

    by Harald Welte at August 16, 2016 02:00 PM

    August 03, 2016

    Free Electrons

    Linux 4.7 statistics: Free Electrons engineer #2 contributor

    Yesterday, LWN.net published an article containing statistics for the 4.7 development cycle. This article is available to LWN.net subscribers only during the coming week, and will then be available to everyone, free of charge.

    It turns out that Boris Brezillon, Free Electrons engineer, is the second most active contributor to the 4.7 kernel in number of commits! The top three contributors in number of commits are: H Hartley Sweeten (208 commits), Boris Brezillon (132 commits) and Al Viro (127 commits).

    LWN.net 4.7 kernel statistics

    In addition to being among the most active developers by number of commits, Boris Brezillon is also the #11 most active contributor in terms of changed lines. As we discussed in our previous blog post, most contributions from Boris targeted the PWM subsystem on one side (atomic update support) and the NAND subsystem on the other side.

    Another Free Electrons engineer shows up in the per-developer statistics: Maxime Ripard is the #17 most active contributor by lines changed. Indeed, Maxime contributed a brand new DRM/KMS driver for the Allwinner display controller.

    As a company, Free Electrons is ranked for the 4.7 kernel as the #12 most active company by number of commits, and #10 by number of changed lines. We are glad to continue being a strong contributor to Linux kernel development, as we have been for the last four years. If you want your hardware to be supported in the official Linux kernel, contact us!

    by Thomas Petazzoni at August 03, 2016 07:41 AM

    August 02, 2016

    Free Electrons

    “Understanding D-Bus” talk at the Toulouse Embedded Linux Meetup

    A few months ago, in May, Free Electrons engineer Mylène Josserand presented a talk titled Understanding D-Bus at the Toulouse Embedded Linux and Android meetup.

    In this talk, Mylène shared her experience working with D-Bus, especially in conjunction with the OFono and Connman projects, to support modem and 3G connections on embedded Linux systems.

    Understanding D-Bus

    We are now publishing the slides of Mylène’s talk; they are available in PDF format.

    by Thomas Petazzoni at August 02, 2016 09:14 AM

    August 01, 2016

    Free Electrons

    Linux 4.7 released, Free Electrons contributions

    Linux 4.7 was released on Sunday by Linus Torvalds, with numerous new features and improvements that have been described in detail on LWN: part 1, part 2 and part 3. KernelNewbies also has an updated page on the 4.7 release. We contributed a total of 222 patches to this release.

    Our most significant contributions:

    • Boris Brezillon has contributed a core improvement to the PWM subsystem: a mechanism that allows the properties of a PWM to be updated in an atomic fashion. This is needed when a PWM has been initialized by the bootloader and the kernel needs to take over without changing the properties of the PWM. See the main patch for more details. What prompted the creation of this patch series is a problem on Rockchip-based Chromebook platforms where a PWM is used for a regulator, and the PWM properties need to be preserved across the bootloader-to-kernel transition. In addition to the changes to the core infrastructure, Boris contributed numerous patches to fix existing PWM users.
    • In the MTD subsystem, Boris Brezillon continued his cleanup efforts
      • Use the common Device Tree parsing code provided by nand_scan_ident() in more drivers, rather than driver-specific code.
      • Move drivers to expose their ECC/OOB layout information using the mtd_ooblayout_ops structure, and use the corresponding helper functions where appropriate. This change will allow a more flexible description of the ECC and OOB layout.
      • Document the Device Tree binding that should now be used for all NAND controllers / NAND chips, with a clear separation between the NAND controller and the NAND chip. See this commit for more details.
    • In the RTC subsystem, Mylène Josserand contributed numerous improvements to the rv3029 and m41t80 drivers, including the addition of support for the RV3049 (the SPI variant of the RV3029). See also our previous blog post on the support of Microcrystal’s RTCs.
    • On the support of Atmel platforms
      • Boris Brezillon contributed a number of fixes and improvements to the atmel-hlcdc driver, the DRM/KMS driver for Atmel platforms
    • On the support of Allwinner platforms
      • Maxime Ripard contributed a brand new DRM/KMS driver to support the display controller found on several Allwinner platforms, with a specific focus on the Allwinner A10. This new driver brings proper graphics support to the Nextthing Co. C.H.I.P platform, including composite output and RGB output for LCD panels. To this effect, in addition to the driver itself, numerous clock patches and Device Tree patches were made.
      • Boris Brezillon contributed a large number of improvements to the NAND controller driver used on Allwinner platforms, including performance improvements.
      • Quentin Schulz made his first kernel contribution by sending a patch fixing the error handling in a PHY USB driver used by Allwinner platforms.
    • On the support of Marvell platforms
      • Grégory Clement made some contributions to the mv_xor driver to make it 64-bit ready, as the same XOR engine is used on the Armada 3700, a Cortex-A53 based SoC. Grégory then enabled the use of the XOR engines on this platform by updating the corresponding Device Tree.
      • Romain Perier did some minor updates related to the Marvell cryptographic engine support. Many more updates will be present in the upcoming 4.8, including significant performance improvements.
      • Thomas Petazzoni contributed various fixes (cryptographic engine usage on some Armada 38x boards, HW I/O coherency related fixes).
      • Thomas also improved the support for Armada 7K and 8K, with the description of more hardware blocks, and updates to drivers.

    Here are, in detail, the different contributions we made to this release:

    by Thomas Petazzoni at August 01, 2016 03:55 PM

    July 23, 2016

    Harald Welte

    python-inema: Python module implementing Deutsche Post 1C4A Internetmarke API

    At sysmocom we maintain a webshop with various smaller items and accessories interesting to the Osmocom community as well as the wider community of people experimenting (aka 'playing') with cellular communications infrastructure. As this is primarily a service to the community and not our main business, I'm always interested in ways to reduce the amount of time our team has to spend operating the webshop.

    In order to make the shipping process more efficient, I discovered that Deutsche Post is offering a Web API based on SOAP+WSDL which can be used to generate franking for the (registered) letters that we ship around the world with our products.

    The most interesting part of this is that you can generate combined address + franking labels. As address labels need to be printed anyway, there is little impact on the shipping process beyond having to use this API to generate the right franking for the particular shipment.

    Given the general usefulness of such an online franking process, I would have assumed that virtually anyone operating some kind of shop that regularly mails letters/products would use it, and hence at least one of those users would have already written some free / open source software code for it. To my big surprise, I could not find any FOSS implementation of this API.

    If you know me, I'm the last person to know anything about web technology beyond HTML 4 which was the latest upcoming new thing when I last did anything web related ;)

    Nevertheless, using the python-zeep module, it was fairly easy to interface with the web service. The weirdest part is the custom signature algorithm that they use to generate some custom SOAP headers. I'm sure they have their reasons ;)
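
    For readers who haven't used zeep before, the general calling pattern looks roughly like the sketch below. This is only an illustration of how a WSDL-described SOAP service is driven from Python - the URL and operation name are placeholders, not the actual 1C4A endpoint or methods (see the python-inema source for the real calls):

    from zeep import Client

    # Placeholder WSDL URL and operation name - NOT the real Deutsche Post
    # 1C4A endpoint; zeep parses the WSDL and builds Python proxies for it.
    client = Client("https://example.com/service?wsdl")

    # Operations described in the WSDL become methods on client.service
    result = client.service.SomeOperation(someParameter="value")
    print(result)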

    Today I hence present the python-inema project, a python module for accessing this Internetmarke API.

    Please note that while I'm fluent in Pascal, Perl, C and Erlang, programming in Python doesn't yet come naturally to me. So if you have any comments/feedback/improvements, they're most welcome by e-mail, including any patches.

    by Harald Welte at July 23, 2016 02:00 PM

    Going to attend Electromagnetic Field 2016

    Based on some encouragement from friends as well as my desire to find more time again to hang out at community events, I decided to attend Electromagnetic Field 2016 held in Guildford, UK from August 5th through 7th.

    As I typically don't like just attending an event without contributing to it in some form, I submitted a couple of talks / workshops, all of which were accepted:

    • An overview talk about the Osmocom project
    • A Workshop on running your own cellular network using OpenBSC and related Osmocom software
    • A Workshop on tracing (U)SIM card communication using Osmocom SIMtrace

    I believe the detailed schedule is still in the works, as I haven't yet been able to find any on the event website.

    Looking forward to having a great time at EMF 2016. After attending Dutch and German hacker camps for almost 20 years, let's see how the Brits go about it!

    by Harald Welte at July 23, 2016 02:00 PM

    EC-GSM-IoT: Enhanced Coverage GSM for IoT

    In private conversation, Holger mentioned EC-GSM-IoT to me, and I had to dig a bit into it. It was introduced in Release 13, but if you do a web search for it, you find surprisingly little information beyond press releases with absolutely zero information content and no "further reading".

    The primary reason for this seems to be that the feature was called EC-EGPRS until the very late stages, when it was renamed for - believe it or not - marketing reasons.

    So when searching for the right term, you actually find specification references and change requests in the 3GPP document archives.

    I tried to get a very brief overview, and from what I could find, it is centered around extending GERAN in the following ways:

    • EC-EGPRS goal: Improve coverage by 20dB
      • New single-burst coding schemes
      • Blind Physical Layer Repetitions where bursts are repeated up to 28 times without feedback from remote end
        • transmitter maintains phase coherency
        • receiver uses processing gain (like incremental redundancy?)
      • New logical channel types (EC-BCCH, EC-PCH, EC-AGC, EC-RACH, ...)
      • New RLC/MAC layer messages for the EC-PDCH communication
    • Power Efficient Operation (PEO)
      • Introduction of eDRX (extended DRX) to allow for PCH listening intervals from minutes up to an hour
      • Relaxed Idle Mode: Important to camp on a cell, not best cell. Reduces neighbor cell monitoring requirements

    In terms of required modifications to an existing GSM/EDGE implementation, there will be (at least):

    • changes to the PHY layer regarding new coding schemes, logical channels and burst scheduling / re-transmissions
    • changes to the RLC/MAC layer in the PCU to implement the new EC specific message types and procedures
    • changes to the BTS and BSC in terms of paging in eDRX

    In case you're interested in more pointers on technical details, check out the links provided at https://osmocom.org/issues/1780

    It remains to be seen how widely this will be adopted. Rolling this change out on modern base station hardware seems technically simple - but it remains to be seen how many equipment makers implement it, and at what cost to the operators. But I think the key issue is whether or not the baseband chipset makers (Intel, Qualcomm, Mediatek, ...) will implement it anytime soon on the device side.

    There are no plans on implementing any of this in the Osmocom stack as of now, but in case anyone is interested in working on this, feel free to contact us on the osmocom-net-gprs@lists.osmocom.org mailing list.

    by Harald Welte at July 23, 2016 10:00 AM

    July 21, 2016

    Bunnie Studios

    Why I’m Suing the US Government

    Today I filed a lawsuit against the US government, challenging Section 1201 of the Digital Millennium Copyright Act. Section 1201 means that you can be sued or prosecuted for accessing, speaking about, and tinkering with digital media and technologies that you have paid for. This violates our First Amendment rights, and I am asking the court to order the federal government to stop enforcing Section 1201.

    Before Section 1201, the ownership of ideas was tempered by constitutional protections. Under this law, we had the right to tinker with gadgets that we bought, we had the right to record TV shows on our VCRs, and we had the right to remix songs. Section 1201 built an extra barrier around copyrightable works, restricting our prior ability to explore and create. In order to repair a gadget, we may have to decrypt its firmware; in order to remix a video, we may have to strip HDCP. Whereas we once readily expressed feelings and new ideas through remixes and hardware modifications, now we must first pause and ask: does this violate Section 1201? Especially now that cryptography pervades every aspect of modern life, every creative spark is likewise dampened by the chill of Section 1201.

    The act of creation is no longer spontaneous.

    Our recent generation of Makers, hackers, and entrepreneurs have developed under the shadow of Section 1201. Like the parable of the frog in the well, their creativity has been confined to a small patch, not realizing how big and blue the sky could be if they could step outside that well. Nascent 1201-free ecosystems outside the US are leading indicators of how far behind the next generation of Americans will be if we keep with the status quo.

    Our children deserve better.

    I can no longer stand by as a passive witness to this situation. I was born into a 1201-free world, and our future generations deserve that same freedom of thought and expression. I am but one instrument in a large orchestra performing the symphony for freedom, but I hope my small part can remind us that once upon a time, there was a world free of such artificial barriers, and that creativity and expression go hand in hand with the ability to share without fear.

    If you want to read more about the lawsuit, please check out the EFF’s press release on the matter.

    by bunnie at July 21, 2016 01:01 PM

    Countering Lawful Abuses of Digital Surveillance

    Completely separate from the Section 1201 lawsuit against the Department of Justice, I’m working with the FPF on a project to counter lawful abuses of digital surveillance. Here’s the abstract:

    Front-line journalists are high-value targets, and their enemies will spare no expense to silence them. Unfortunately, journalists can be betrayed by their own tools. Their smartphones are also the perfect tracking device. Because of the precedent set by the US’s “third-party doctrine,” which holds that metadata on such signals enjoys no meaningful legal protection, governments and powerful political institutions are gaining access to comprehensive records of phone emissions unwittingly broadcast by device owners. This leaves journalists, activists, and rights workers in a position of vulnerability. This work aims to give journalists the tools to know when their smart phones are tracking or disclosing their location when the devices are supposed to be in airplane mode. We propose to accomplish this via direct introspection of signals controlling the phone’s radio hardware. The introspection engine will be an open source, user-inspectable and field-verifiable module attached to an existing smart phone that makes no assumptions about the trustability of the phone’s operating system.

    You can find out more about the project by reading the white paper at Pubpub.

    by bunnie at July 21, 2016 01:00 PM

    July 16, 2016

    Harald Welte

    Deeper ventures into Ericsson (Packet) Abis

    Some topics keep coming back, even a number of years after first having worked on them. And then you start to search online using your favorite search engine - and find your old posts on that subject are the most comprehensive publicly available information on the subject ;)

    Back in 2011, I was working on some very basic support for Ericsson RBS2xxx GSM BTSs in OpenBSC. The major part of this was to figure out the weird dynamic detection of the signalling timeslot, as well as the fully non-standard OM2000 protocol for OML. Once it reached the state of a 'proof-of-concept', work on this ceased, and it remained in a state where lots of manual steps were still involved in BTS bring-up.

    I've recently picked this topic up again, resulting in some work-in-progress code in http://git.osmocom.org/openbsc/log/?h=laforge/om2000-fsm

    Beyond classic E1 based A-bis support, I've also been looking (again) at Ericsson Packet Abis. Packet Abis is their understanding of Abis over IP. However, it is - again - much further from the 3GPP specifications than what we're used to in the Osmocom universe. Abis/IP as we know it consists of:

    • RSL and OML over TCP (inside an IPA multiplex)
    • RTP streams for the user plane (voice)
    • Gb over IP (NS over UDP/IP), as the PCU is in the BTS.

    In the Ericsson world, they took a much lower-layer approach and decided to

    • start with L2TP over IP (not the L2TP over UDP that many people know from VPNs)
    • use the IETF-standardized Pseudowire type for HDLC but use a frame format in violation of the IETF RFCs
    • Talk LAPD over L2TP for RSL and OML
    • Invent a new frame format for voice codec frames called TFP and feed that over L2TP
    • Invent a new frame format for the PCU-CCU communication called P-GSL and feed that over L2TP

    I'm not yet sure if we want to fully support that protocol stack from OpenBSC and related projects, but in any case I've extended Wireshark to decode such protocol traces properly by

    • Extending the L2TP dissector with Ericsson specific AVPs
    • Improving my earlier packet-ehdlc.c with better understanding of the protocol
    • Implementing a new TFP dissector from scratch
    • Implementing a new P-GSL dissector from scratch

    The resulting work can be found at http://git.osmocom.org/wireshark/log/?h=laforge/ericsson-packet-abis in case anyone is interested. I've mostly been working with protocol traces from RBS2409 so far, and they are decoded quite nicely for RSL, OML, Voice and Packet data. As far as I know, the format of the STN / SIU of other BTS models is identical.

    Is anyone out there in possession of Ericsson RBS2xxx RBSs interested in collaborating on either a Packet Abis implementation, or an interface of the E1 or packet based CCU-PCU interface to OsmoPCU?

    by Harald Welte at July 16, 2016 10:00 AM

    July 12, 2016

    Bunnie Studios

    Name that Ware July 2016

    The ware for July 2016 is shown below.

    Thanks to Mark Jessop for contributing this wonderful ware. It’s a real work of art on the front side, but Google makes it way too easy to identify with a couple of part number queries. To make it a smidgen more challenging, I decided to start this month’s competition with just the back side of the board. If the photo above doesn’t give enough clues, I’ll add a photo of the front side as well…

    by bunnie at July 12, 2016 09:01 AM

    Winner, Name that Ware June 2016

    The Ware for June 2016 is an ATS810C by ATS Automation. There’s no information on their website about this particular board, but there are lots of good ideas in the comments as to what this could be from. However, none of them have me 100% convinced. So I’ll just go with the first answer, by notafile, that generally identified the ware as a stepper motor controller with an RS-232 interface. Congrats, email me for your prize!

    by bunnie at July 12, 2016 09:00 AM

    Elphel

    I will not have to learn SystemVerilog

    Or at least the larger (verification) part of it – interfaces, packages and a few other synthesizable features are very useful to reduce the size of Verilog code and make it easier to maintain. We are now able to run the production target system Python code with Cocotb simulation over BSD sockets.

    Client-server simulation of NC393 with Cocotb



    Previous workflow

    Before switching to Cocotb our FPGA-related workflow involved:

    1. Creating RTL design code
    2. Writing Verilog tests
    3. Running simulations
    4. Synthesizing and creating bitfile
    5. Re-writing test code to run on the target system in Python
    6. Developing kernel drivers to support the FPGA functionality
    7. Developing applications that access FPGA functionality through the kernel drivers

    Of course the steps are not that linear: there are hundreds of loops between steps 1 and 3 (editing RTL source after finding errors at step 3), almost as many from 5 to 1 (when the problems reveal themselves during hardware testing), and a few problems are noticed only at step 6 or 7. Steps 2, 5 and 6+7 involve a gross violation of the DRY principle, especially the first two. The last steps differ sufficiently from step 5 because their purpose is different – while the Python tests are made to reveal potential problems, including infrequent conditions, the drivers only use a subset of the functionality and try to “hide” problems – they perform recovery actions to maintain operation of the device after an abnormal condition occurs.

    We had already tried to mitigate these problems – a significant part of the design flexibility is achieved through parametrized modules. Parameters are used to define the register map and register bit fields – they are among the items most frequently modified when new functionality is added. The Python code in the camera is able to read and process the Verilog parameter include files when running on the target system, as well as when generating C header files for the kernel drivers, so here the DRY principle holds. Changes to any parameter definitions in the Verilog files are automatically propagated to both the Python and the C code.
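
    As a toy illustration only (this is not the actual x393 parser, and real Verilog parameter syntax has more forms than shown here), pulling such definitions into Python can be as small as a regular-expression pass:

    import re

    # Toy sketch: extract "parameter NAME = VALUE;" (or comma-terminated)
    # definitions from a Verilog include file into a Python dictionary.
    PARAM_RE = re.compile(r"parameter\s+(\w+)\s*=\s*([^;,]+)[;,]")

    def read_verilog_parameters(path):
        params = {}
        with open(path) as f:
            for name, value in PARAM_RE.findall(f.read()):
                params[name] = value.strip()
        return params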

    But it is definitely not enough. Steps 2 and 5 may involve tens of thousands of lines of code, and a large part of the Python code is virtually a literal translation of the Verilog original. All our FPGA-based systems (and this is likely true for most other applications) involve symbiotic operation of the FPGA and some general-purpose processor. In Xilinx Zynq they are on the same chip; in our earlier designs they were connected on the PCB. Most of the volume of the Verilog test code is the simulation of the CPU running some code. This code interacts with the rest of the design through writes/reads of the memory-mapped control/status registers, as well as through the system memory when the FPGA is a master sending/receiving data over DMA.

    This is one of the reasons I hesitated to learn the verification functionality of SystemVerilog. There are plenty of computer programming languages that may be a better fit for simulating the program activity of the CPU (this is what they do naturally). Currently the most convenient one for bringing new hardware to life seems to be Python, so I was interested in trying Cocotb. If I had known it was that easy I would probably have started earlier, but having a rather large volume of existing Verilog test code, I kept postponing the switch.

    Trying Cocotb

    Two weeks ago I gave it a try. First I prepared the instruments – integrated Cocotb into VDT and made sure that the Eclipse console output is clickable for simulator-reported problems and simulator output, as well as for errors in the Python code and for the source links in the Cocotb logs. I used the Cocotb version of the JPEG encoder that has Python code for simulation – I just added configuration options for VDT and fixed the code to reduce the number of warning markers that VDT generated. Here is the version that can be imported as an Eclipse+VDT project.
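
    For readers who have not seen Cocotb before: a test is just a Python coroutine driving the design under test through the simulator. A minimal, generic sketch in the older generator-based style (the signal names clk, reset and status are made up, and this is not the x393 or JPEG encoder test code) looks roughly like this:

    import cocotb
    from cocotb.clock import Clock
    from cocotb.triggers import RisingEdge, Timer
    from cocotb.result import TestFailure

    @cocotb.test()
    def toy_reset_test(dut):
        """Toy test: start a clock, apply reset, then check a status signal."""
        cocotb.fork(Clock(dut.clk, 10, units="ns").start())  # free-running clock
        dut.reset <= 1
        yield Timer(100, units="ns")                          # hold reset for 100 ns
        dut.reset <= 0
        for _ in range(10):                                   # let the design run a little
            yield RisingEdge(dut.clk)
        if int(dut.status.value) != 0:
            raise TestFailure("unexpected status value after reset")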

    Converting x393 camera project to Cocotb simulation

    Next was to convert the simulation of our x393 camera project to use Cocotb. For that I was looking not just to replace the Verilog test code with Python, but to use for simulation the same Python program that we already have running on the target hardware. The program already had a “dry run” option for development on a host computer; that part had to be modified to access the simulator. I needed some way to effectively isolate the Python code that is linked to the simulator from the code of the target system, and BSD sockets provide a good match for that. The part of the program that uses Cocotb modules (and is subject to the special requirements on Python coroutines for them to work in simulation) plays the role of the server. The other part (linked to the target system program) replaces memory accesses: it sends the request parameters over the socket connection and waits for the response from the server. The only commands currently implemented besides memory accesses are “finish” (to complete simulation and analyze the results in the wave viewer – GtkWave), “flush” (flush file writes – similar to cache flushes on real hardware) and an interruptible (by the simulated system interrupt outputs) wait for a specified time. Simulation time is frozen between requests from the client, so the target system has to explicitly let the simulated system run for a certain time (or until it generates an interrupt).

    Simulation client

    Below is an example of the modification to the target code memory write (full source). The X393_CLIENT is True: branch is for the old dry-run (NOP) mode, the second one (elif not X393_CLIENT is None:) is for the simulation server, and the last one accesses real memory over /dev/mem.

    def write_mem (self,addr, data,quiet=1):
            """
            Write 32-bit word to physical memory
            @param addr - physical byte address
            @param data - 32-bit data to write
            @param quiet - reduce output
            """
            if X393_CLIENT is True:
                print ("simulated: write_mem(0x%x,0x%x)"%(addr,data))
                return
            elif not X393_CLIENT is None:
                if quiet < 1:
                    print ("remote: write_mem(0x%x,0x%x)"%(addr,data))
                X393_CLIENT.write(addr, [data])
                if quiet < 1:
                    print ("remote: write_mem done" )
                return
            with open("/dev/mem", "r+b") as f:
                page_addr=addr & (~(self.PAGE_SIZE-1))
                page_offs=addr-page_addr
                mm = self.wrap_mm(f, page_addr)
                packedData=struct.pack(self.ENDIAN+"L",data)
                d=struct.unpack(self.ENDIAN+"L",packedData)[0]
                mm[page_offs:page_offs+4]=packedData
                if quiet < 2:
                    print ("0x%08x == 0x%08x (%d)"%(addr,d,d))

    There is not much magic in initializing the X393_CLIENT class instance:

                print("Creating X393_CLIENT")
                try:
                    X393_CLIENT= x393Client(host=dry_mode.split(":")[0], port=int(dry_mode.split(":")[1]))
                    print("Created X393_CLIENT")
                except:
                    X393_CLIENT= True
                    print("Failed to create X393_CLIENT")

    And all the socket-handling code is less than 100 lines (source):

    import json
    import socket
    
    class SocketCommand():
        command=None
        arguments=None
        def __init__(self, command=None, arguments=None): # , debug=False):
            self.command = command
            self.arguments=arguments
        def getCommand(self):
            return self.command
        def getArgs(self):
            return self.arguments
        def getStart(self):
            return self.command == "start" 
        def getStop(self):
            return self.command == "stop" 
        def getWrite(self):
            return self.arguments if self.command == "write" else None
        def getWait(self):
            return self.arguments if self.command == "wait" else None
        def getFlush(self):
            return self.command == "flush"
        def getRead(self):
            return self.arguments if self.command == "read" else None
        def setStart(self):
            self.command = "start"
        def setStop(self):
            self.command = "stop"
        def setWrite(self,arguments):
            self.command = "write"
            self.arguments=arguments
        def setWait(self,arguments): # wait irq mask, timeout (ns)
            self.command = "wait"
            self.arguments=arguments
        def setFlush(self):         #flush memory file (use when sync_for_*
            self.command = "flush"
        def setRead(self,arguments):
            self.command = "read"
            self.arguments=arguments
        def toJSON(self,val=None):
            if val is None:
                return json.dumps({"cmd":self.command,"args":self.arguments})
            else:
                return json.dumps(val)    
        def fromJSON(self,jstr):
            d=json.loads(jstr)
            try:
                self.command=d['cmd']
            except:
                self.command=None
            try:
                self.arguments=d['args']
            except:
                self.arguments=None
            
    class x393Client():
        def __init__(self, host='localhost', port=7777):
            self.PORT = port
            self.HOST = host   # Symbolic name meaning all available interfaces
            self.cmd= SocketCommand()
        def communicate(self, snd_str):
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.connect((self.HOST, self.PORT))
            sock.send(snd_str)
            reply = sock.recv(16384)  # limit reply to 16K
            sock.close()
            return reply
        def start(self):
            self.cmd.setStart()
            print("start->",self.communicate(self.cmd.toJSON()))
        def stop(self):
            self.cmd.setStop()
            print("stop->",self.communicate(self.cmd.toJSON()))
        def write(self, address, data):
            self.cmd.setWrite([address,data])
            rslt = self.communicate(self.cmd.toJSON())
        def waitIrq(self, irqMask,wait_ns):
            self.cmd.setWait([irqMask,wait_ns])
            rslt = self.communicate(self.cmd.toJSON())
        def flush(self):
            self.cmd.setFlush()
        def read(self, address):
            self.cmd.setRead(address)
            rslt = self.communicate(self.cmd.toJSON())
            return json.loads(rslt)
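
    For illustration, a hypothetical session with this client (the addresses and values here are made up, chosen only so that they fall into the AXI range handled by the server code shown below) could look like:

    client = x393Client(host="localhost", port=7777)
    client.write(0x40000000, [0x12345678])   # dispatched to the maxigp0 AXI write on the server
    print(client.read(0x40000000))           # read back through the server
    client.waitIrq(0x1, 1000000)             # wait up to 1 ms of simulated time for an interrupt
    client.stop()                            # tell the server we are done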

    Simulation server

    The server code is larger (it now has 360 lines) but it is rather simple too. It runs in the Cocotb environment (coroutines that yield to the simulator have “@cocotb.coroutine” decorations) and receives and responds to commands over the socket. When a command involves writing, it compares the requested address to the pre-defined ranges and either sends the data over one of the master AXI channels (defined in x393interfaces.py) or writes to the “system memory” – just a filesystem file, at the offset corresponding to the specified address.

            elif self.cmd.getWrite():
                ad = self.cmd.getWrite()
                self.dut._log.debug('Received WRITE, 0x%0x: %s'%(ad[0],hex_list(ad[1])))
                if ad[0] in self.RESERVED:
                    if ad[0] == self.INTM_ADDRESS:
                        self.int_mask = ad[1][0]
                    rslt = 0 
                elif (ad[0] >= self.memlow) and (ad[0] < self.memhigh):
                    addr = ad[0]
                    self._memfile.seek(addr)
                    for data in ad[1]: # currently only single word is supported
                        sdata=struct.pack("<L",data)
                        self._memfile.write(sdata)
                        self.dut._log.debug("0x%08x -> 0x%08x"%(data,addr))
                        addr += 4
                    rslt = 0 
                elif (ad[0] >= 0x40000000) and (ad[0] < 0x80000000):
                    rslt = yield self.maxigp0.axi_write(address =     ad[0],
                                                    value =           ad[1],
                                                    byte_enable =     None,
                                                    id =              self.writeID,
                                                    dsize =           2,
                                                    burst =           1,
                                                    address_latency = 0,
                                                    data_latency =    0)
                    self.dut._log.debug('maxigp0.axi_write yielded %s'%(str(rslt)))
                    self.writeID = (self.writeID+1) & self.writeIDMask
                elif (ad[0] >= 0xc0000000) and (ad[0] < 0xfffffffc):
                    self.ps_sbus.write_reg(ad[0],ad[1][0])
                    rslt = 0 
                else:
                    self.dut._log.info('Write address 0x%08x is outside of maxgp0, not yet supported'%(ad[0]))
                    rslt = 0
                self.dut._log.info('WRITE 0x%08x = %s'%(ad[0],hex_list(ad[1], max_items = 4)))
                self.soc_conn.send(self.cmd.toJSON(rslt)+"\n")
                self.dut._log.debug('Sent rslt to the socket')

    Similarly, read commands acquire data either from the AXI read channel or from the same memory image file; data is written to that file over the AXI slave interface by the simulated device.
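
    As a rough sketch only (this is not the actual server code), the memory-image side of such a read can be as small as the helper below, mirroring the write path above – one little-endian 32-bit word per 4-byte address step:

    import struct

    def read_mem_image(memfile, addr, count=1):
        """Toy helper: read 'count' little-endian 32-bit words starting at
        byte address 'addr' from the memory image file."""
        memfile.seek(addr)
        return [struct.unpack("<L", memfile.read(4))[0] for _ in range(count)]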

    Top Verilog module

    The remaining part of the conversion from plain Verilog to Cocotb simulation is the top Verilog file – x393_dut.v. It contains an instance of the actual synthesized module (x393_i) and Verilog simulation modules of the connected peripherals. These modules can also be replaced by Python ones (and some eventually will be), but others, like the Micron DDR3 memory model, are easier to use as they are provided by the chip manufacturer.

    Python modules can access hierarchical nodes in the design, but to keep things cleaner all the design inputs and outputs are routed to/from the outputs/inputs of the x393_dut module. In the case of Xilinx Zynq that involves connecting internal nodes – Zynq treats the CPU interface not as I/O, but as an empty module (PS7) instantiated in the design.

    Screenshot of the simulation with client (black console) and server (in Eclipse IDE)


    Conclusions

    Conversion to Python simulation was simple, considering the rather large amount of project (Python+Verilog) code – about 100K lines. After preparing the tools it took just one week, and now we have the same code running both on the real hardware and in the simulator.

    Splitting the simulation into a client/server duo makes it easy to use any other programming language on the client side – not just the Python of our choice; Unix sockets provide convenient means for that. The address decoder (which decides which interface to use for a received memory access request) is better kept on the server (simulator) side of the socket connection, not on the client. This minimizes changes to the target code, and the server plays the role of the memory-mapped system bus, behaving as the real hardware does.

    Are there any performance penalties compared to all-Verilog simulation? None visible in our designs. Simulation (and Icarus Verilog is a single-threaded application) is the most time-consuming part – for our application it is about 8,000,000 times slower than the modeled hardware. Useful simulations (all-Verilog) for the camera run for 15-40 minutes with tiny 64×32 pixel images. If we ran a normal set of 14 MPix frames, it would take about a week for the first images to appear at the output. The same Python code on the target runs for a fraction of a second, so even though the simulator is stopped while the Python code runs, the combined execution time does not noticeably change for the Python+Verilog vs. all-Verilog mode. It would be nice to try Verilator in addition to Icarus. While it is not a full Verilog simulator (it can not handle ‘bx and ‘bz values, just ‘0’ and ‘1’), it is much faster.

    by Andrey Filippov at July 12, 2016 12:32 AM

    July 08, 2016

    Video Circuits

    Edge of Frame DIY Space For London

    I am performing on Thursday the 14th at DIY space for London.
    Edge of Frame very kindly asked me to play. There will also be a fantastic selection of films by other artists, including some of my favourite people operating in the world of experimental electronic image making.

    by Chris (noreply@blogger.com) at July 08, 2016 02:21 AM

    July 03, 2016

    ZeptoBARS

    MAX2659 - SiGe GPS/GNSS LNA : weekend die-shot

    Maxim Integrated MAX2659 is a low-noise (NF 0.8dB) SiGe RF amplifier for GPS/GNSS applications.

    July 03, 2016 01:06 AM

    July 01, 2016

    Video Circuits

    Video Circuits Room at Brighton Modular Meet

    Alexander Peverett and I will be looking after a Video Synthesis room at Brighton Modular Meet this weekend. Come down to chat about video synthesis and our strange love for CRT tubes :) There will be at least 6 Sony PVMs on display, along with a few systems and unusual bits of video gear. I am super excited that Nitin AKA Bradford Bahamas will be bringing his incredible playable CRT rig.

    brightonmodularmeet.co.uk
    Invited to be part of the 2016 International Conference on Live Interfaces (http://www.liveinterfaces.org/) at the University of Sussex, this year's Brighton Modular Meet takes place in the newly renovated Attenborough Centre for the Creative Arts (formerly the Gardner Arts Centre). We will be taking over a number of rooms in this Grade 2 listed building designed by Sir Basil Spence.

    In addition to a large open access space where you can set up your synth and mingle with others, this year we will have a room for manufacturers and shops, a space for talks and live performances, and a video synthesis room.



    by Chris (noreply@blogger.com) at July 01, 2016 03:22 AM

    June 28, 2016

    Bunnie Studios

    Episode 4: Reinventing 35 years of Innovation

    Episode 4 is out!

    It’s a daunting challenge to document a phenomenon as diverse as Shenzhen, so I don’t envy the task of trying to fit it in four short episodes.

    Around 6:11 I start sounding like a China promo clip. This is because as a foreigner, I’m a bit cautious about saying negative things about a country, especially when I’m a guest of that country.

    I really love the part at 3:58 where Robin Wu, CEO of Meegopad, reflects on the evolution of the term Shanzhai in China:

    I was one of the people who made Shanzhai products. In the past, everyone looked down on Shanzhai products. Now, I think the idea of the maker is the same as Shanzhai. Shanzhai is not about copying. Shanzhai is a spirit.

    by bunnie at June 28, 2016 03:56 PM

    June 26, 2016

    ZeptoBARS

    Microchip HCS301 KeeLoq : weekend die-shot

    Microchip HCS301 is an old and popular code hopping encoder for remote controls.
    Die size 2510x1533 µm, 1µm technology.



    PS. Thanks Andrew for the chips!

    June 26, 2016 11:34 PM

    Video Circuits

    Raven Row Vasulka Talk

    Here is a shot from the talk; hopefully I will have some more images and audio to share soon.

    by Chris (noreply@blogger.com) at June 26, 2016 03:27 AM

    June 21, 2016

    Bunnie Studios

    Episode 3: A New Breed of Intellectual Property

    Episode 3 is out!

    I say the darndest things on camera. O_o

    Like everyone else, I see the videos when they are released. So far, this episode makes the clearest case for why Shenzhen is the up-and-coming place for hardware technology.

    Most of the time my head is buried in resistors and capacitors. However, this video takes a wide-angle shot of the tech ecosystem. I’ve been visiting for over a decade, and this video is the first time I’ve seen some of the incredible things going on in Shenzhen, particularly in the corporate world.

    by bunnie at June 21, 2016 03:31 PM

    June 17, 2016

    Free Electrons

    Free Electrons contributes to KernelCI.org

    The Linux kernel is well-known for its ability to run on thousands of different hardware platforms. However, it is obviously impossible for the kernel developers to test their changes on all those platforms to check that no regressions are introduced. To address this problem, the KernelCI.org project was started: it tests the latest versions of the Linux kernel from various branches on a large number of hardware platforms and provides a centralized interface to browse the results.

    KernelCI.org project


    From a physical point of view, KernelCI.org relies on labs containing a number of hardware platforms that can be remotely controlled. Those labs are provided by various organizations or individuals. When a commit in one of the Linux kernel Git branches monitored by KernelCI is detected, numerous kernel configurations are built, tests are sent to all labs and results are collected on the KernelCI.org website. This allows kernel developers and maintainers to detect and fix bugs and regressions before they reach users. As of May 10th, 2016, KernelCI stats show a pool of 185 different boards and around 1900 daily boots.

    Free Electrons is a significant contributor to the Linux kernel, especially in the area of ARM hardware platform support. Several of our engineers are maintainers or co-maintainers of ARM platforms (Grégory Clement for Marvell EBU, Maxime Ripard for Allwinner, Alexandre Belloni for Atmel and Antoine Ténart for Annapurna Labs). Therefore, we have a specific interest in participating in an initiative like KernelCI, to make sure that the platforms we maintain continue to work well; moreover, a number of the platforms we care about were not being tested by the KernelCI project.

    Over the last few months, we have been building a board lab in our offices, and we joined the KernelCI project on April 25th. Our lab currently consists of 15 boards:

    • Atmel SAMA5D2 Xplained
    • Atmel SAMA5D3 Xplained
    • Atmel AT91SAM9X25EK
    • Atmel AT91SAM9X35EK
    • Atmel AT91SAMA5D36EK
    • Atmel AT91SAM9M10G45EK
    • Atmel AT91SAM9261EK
    • BeagleBone Black
    • Beagleboard-xM
    • Marvell Armada XP based Plathome Openblocks AX3
    • Marvell Armada 38x Solidrun ClearFog
    • Marvell Armada 38x DB-88F6820-GP
    • Allwinner A13 Nextthing Co. C.H.I.P
    • Allwinner A33 Sinlinx SinA33
    • Freescale i.MX6 Boundary Devices Nitrogen6x

    We will very soon be adding 4 more boards:

    • Atmel SAMA5D4 Xplained
    • Atmel SAMA5D34EK
    • Marvell Armada 7K 7040-DB (ARM64)
    • Marvell Armada 39x DB

    Free Electrons board farm

    Three of the boards we have were already tested thanks to other KernelCI labs, but the other sixteen boards were not tested at all. In total, we plan to have about 50 boards in our lab, mainly for the ARM platforms that we maintain in the official Linux kernel. The results of all boots we performed are visible on the KernelCI site. We are proud to be part of this unique effort to perform automated testing and validation of the Linux kernel!

    In the coming weeks, we will publish additional articles to present the software and physical architecture of our lab and the program we developed to remotely control boards that are in our lab, so stay tuned!

    by Quentin Schulz at June 17, 2016 10:07 AM

    Buildroot training updated to Buildroot 2016.05

    Almost exactly one year ago, we announced the availability of our training course on Buildroot. This course received very good feedback, both from our customers and from the community.

    In our effort to continuously improve and update our training materials, we have recently updated our Buildroot training course to Buildroot 2016.05, which was released at the end of May. In addition to adapting our practical labs to use this new Buildroot version, we have also improved the training materials to cover some of the new features that have been added over the last year in Buildroot. The most important changes are:

    • Cover the graph-size functionality, which generates a pie chart of the filesystem size, per package. This is a very nice feature to analyze the size of your root filesystem and see how to reduce it.
    • Improve the description of the local site method and the override source directory functionalities, which are very useful when doing active application/library development in Buildroot, or to package custom application/library code.
    • Add explanations about using genimage to create complete SD card images that are ready to be flashed.
    • Add explanations about the hash file that can be added to packages to verify the integrity of the source code that is downloaded before it gets built.

    The updated training materials are available on the training page: agenda (PDF), slides (PDF) and practical labs (PDF).

    Contact us if you would like to organize this training session in your company: we are available to deliver it worldwide.

    by Thomas Petazzoni at June 17, 2016 07:57 AM

    June 14, 2016

    Bunnie Studios

    Episode 2: Shenzhen and the Maker Movement

    Woooooo episode 2 is out!

    I wrote a post once about getting my phone’s screen fixed in Shenzhen. I’ve learned a lot from watching these phone repair guys do their thing in Shenzhen.

    This video shows most of the process, from splitting the bonded LCD/digitizer assembly using a cutting wire and a heated vacuum chuck, to rebonding, to removing bubbles in the LOCA (liquid optically clear adhesive) by way of a vacuum chamber. There’s also typically a UV curing step that was probably left out of the segment for time reasons. The whole video is a good watch, but if you’re short on time, the segment on repairing a screen starts at 12:36.

    by bunnie at June 14, 2016 07:23 PM

    June 13, 2016

    Bunnie Studios

    Name that Ware, June 2016

    The Ware for June 2016 is shown below.

    Thanks to Liwei from TinyMOS for contributing the ware. He found it on his way to school many years ago. The function of this board is probably an easy guess, so bonus points to anyone who has a convincing idea about the larger system this was once a part of.

    by bunnie at June 13, 2016 02:01 PM

    Winner, Name that Ware May 2016

    The Ware for May 2016 was guessed within the hour of posting — it’s an Antminer S1 (v1.4 mainboards) from BitMainTech.

    Tracing through the rapid-fire guesses and picking a winner was a bit of a convoluted process. Based on my primary criterion of awarding the prize to the first person to home in on the make/model of a ware, the winner is Wouter’s post at 10:15PM (congrats; btw, email me for your prize).

    However, if make/model isn’t guessed, I’d go with an alternate criterion of thoughtful analysis, which would give the prize to Richard Ames’ conclusion, posted at 10:06PM, that it’s a cryptocurrency compute module. However, even that decision is contradicted by 0x3d’s post at 9:53PM, earlier than all the rest, that this is an ASIC cryptocoin miner — no make/model, but still the correct genre.

    Also, in response to Richard Ames’ question: HDB = Housing Development Board. It’s the colloquial term in Singapore for public housing, after the government agency in charge of managing public housing.

    by bunnie at June 13, 2016 02:01 PM

    June 11, 2016

    ZeptoBARS

    OPUS Microsystems OP-6111 - MEMS 2D scanner : weekend die-shot

    OPUS Microsystems OP-6111 is a sensor-less resonant 2D-tilting MEMS mirror for applications requiring laser scanning (laser projectors, 3D scanning, photoresist exposure, etc.). Electrostatic actuator.

    Die size 3688x3180 µm, mirror is 1000 µm in diameter.


    June 11, 2016 02:13 PM

    June 10, 2016

    Video Circuits

    Telechrome Special Effects Generator

    This is an interesting video effects system from '59. To put it into context: quad video was only developed in the mid '50s, colour quad became available in '58, Telerecording/Kinescope film recording systems were still in use by the broadcast networks, and the earliest machines to be discussed as "video synthesizers" were still a few years off.

    What's interesting about this system is its relatively open, modular nature, something that became less common as the technology became more advanced. The Telechrome system (not to be confused with Baird's early colour tube experiments) contains a waveform generator, a switching amplifier and a control unit for selecting effects. The waveform generator would be used to generate electronic mattes analogous to film mattes, and the switching amplifier would place two distinct video sources on either side of the geometric form.

    Here is the control panel

    You can see each of the system's modules here

    And here is the advertisement in full.

    I also dug up a later system with a similarly modular approach.

    Sources here:
    http://www.americanradiohistory.com/Archive-BC/BC-1959/1959-10-05-BC.pdf
    http://saltofamerica.com/contents/displayArticle.aspx?19_393

    by Chris (noreply@blogger.com) at June 10, 2016 12:20 AM

    June 08, 2016

    Bunnie Studios

    WIRED Documentary on Shenzhen

    WIRED is now running a multi-part video documentary on Shenzhen:

    This shoot was a lot of fun, and it was a great pleasure working with Posy and Jim. I think their talent as producer and director really shows through. They also did a great job editing my off-the-cuff narratives. The spot in the video where I’m pointing out Samsung parts isn’t matched to the b-roll of Apple parts, but in their defense I was moving so fast through the market that Jim couldn’t capture all the things I was pointing at.

    I haven’t seen the whole documentary myself (I was just called in to give some tours of the market and answer a few questions in my hotel room), so I’m curious and excited to see where this is going! Especially because of the text chosen for printing during my Moore’s Law explanation at 3:13 — “ALL PROPRIETARY AND NO OPEN SOURCE MAKES INNOVATION A SLOW PROCESS.”

    :)

    by bunnie at June 08, 2016 06:33 AM

    June 06, 2016

    Harald Welte

    Recent public allegations against Jacob Appelbaum

    In recent days, various public allegations have been brought forward against Jacob Appelbaum. The allegations range from plagiarism to sexual assault and rape.

    I find it deeply disturbing that the alleged victims are putting up the effort of a quite slick online campaign to defame Jake's name, using a domain name consisting of only his name and virtually any picture you can find online of him from the last decade, and - to a large extent - hiding in anonymity.

    I'm upset about this not because I happen to know Jake personally for many years, but because I think it is fundamentally wrong to bring up those accusations in such a form.

    I have no clue what is the truth or what is not the truth. Nor does anyone else who has not experienced or witnessed the alleged events first hand. I'd hope more people would think about that before commenting on this topic one way or another on Twitter, in their blogs, on mailing lists, etc. It doesn't matter what we believe, hypothesize or project based on a personal like or dislike of either the person accused or of the accusers.

    We don't live in the Middle Ages, and we gave up on the pillory a long time ago (and the pillory was used after a judgement, not before). If there was illegal/criminal behavior, then our societies have a well-established and respected procedure to deal with it: it is based on laws, legal procedure and courts.

    So if somebody has a claim, they can and should seek legal support and bring those claims forward to the competent authorities, rather than starting what very easily looks like a smear campaign (whether it is one or not).

    Please don't get me wrong: I have the deepest respect and sympathies for victims of sexual assault or abuse - but I also have a deep respect for the legal foundation our societies have built over hundreds of years, and its principles, including the human right of "presumption of innocence".

    No matter who has committed which type of crime, everyone deserves to receive a fair trial, and they are innocent until proven guilty.

    I believe nobody deserves such a public defamation campaign, nor does anyone have the authority to impose such a sentence, not even a court of law. The pillory was abandoned for good reasons.

    by Harald Welte at June 06, 2016 10:00 AM

    June 01, 2016

    Harald Welte

    Nuand abusing the term "Open Source" for non-free Software

    Back in late April, the well-known high-quality SDR hardware company Nuand published a blog post about an Open Source Release of a VHDL ADS-B receiver.

    I was quite happy at that time about this, and bookmarked it for further investigation at some later point.

    Today I actually looked at the source code, and more by coincidence noticed that the LICENSE file contains a license that is anything but Open Source: The license is a "free for evaluation only" license, and it is only valid if you run the code on an actual Nuand board.

    Both of the above are clearly not compatible with any of the well-known and respected definitions of Open Source, particularly not the official Open Source Definition of the Open Source Initiative.

    I cannot even begin to describe how much this upsets me. This is once again openwashing, where something that clearly is not Free or Open Source Software is labelled and marketed as such.

    I don't mind if an author chooses to license his work under a proprietary license. It is his choice to do so under the law, and it generally makes such software utterly unattractive to me. If others still want to use it, it is their decision. However, if somebody produces or releases non-free or proprietary software, then they should make that very clear and not mis-represent it as something that it clearly isn't!

    Open-washing only confuses everyone, and it tries to market the respective company or product in a light that it doesn't deserve. I believe the proper English proverb is to adorn oneself with borrowed plumes.

    I strongly believe the community must stand up against such practices and clearly voice that this is not something generally acceptable or tolerated within the Free and Open Source software world. It's sad that this is happening more frequently, like recently with OpenAirInterface (see related blog post).

    I will definitely write an e-mail to Nuand management requesting to correct this mis-representation. If you agree with my posting, I'd appreciate if you would contact them, too.

    by Harald Welte at June 01, 2016 10:00 AM

    May 27, 2016

    Harald Welte

    Keynote at Black Duck Korea Open Source Conference

    I gave a keynote at the Black Duck Korea Open Source Conference yesterday, and I'd like to share some thoughts about it.

    In terms of the content, I spoke about the fact that the ultimate goal/wish/intent of free software projects is to receive contributions and for all of the individual and organizational users to join the collaborative development process. However, that's just the intent, and it's not legally required.

    Due to GPL enforcement work, a lot of attention has been drawn over the past ten years in corporate legal departments to how to comply with FOSS license terms, particularly copyleft-style licenses like GPLv2 and GPLv3.

    However, license compliance ensures only the absolute bare legal minimum of engagement with the Free Software community. While that is legally sufficient, the community actually wants all developers to join the collaborative development process, where the resources for development are contributed and shared among all developers.

    So I think that if we had more contributions and a fairer distribution of the work of developing and maintaining the related software, we would not have to worry so much about legal enforcement of licenses.

    However, in the absence of companies being good open source citizens, pulling out the legal baton is all we can do to at least require them to share their modifications at the time they ship their products. That code might not be mergeable, or it might be outdated, so its value might be less than we would hope for, but it is a beginning.

    Now some people might be critical of me speaking at a Black Duck Korea event, where Black Duck is a company selling (expensive!) licenses to proprietary tools for license compliance. Thereby, speaking at such an event might be seen as an endorsement of Black Duck and/or proprietary software in general.

    Honestly, I don't think so. If you've ever been to a Black Duck Korea event, you will notice there is no marketing or sales booth, and that there is no sales pitch on the conference agenda. Rather, you have speakers with hands-on experience in license compliance, either from a community point of view or from a corporate point of view, i.e. how companies manage license compliance processes internally.

    Thus, the event is not a sales show for proprietary software, but an event that brings together various people genuinely interested in license compliance matters. The organizers very clearly understand that they have to keep that kind of separation. So it's actually more like a community event, sponsored by a commercial entity - and that in turn is true for most technology conferences.

    So I have no ethical problems with speaking at their event. People who know me know that I don't like proprietary software at all for ethical reasons, and avoid it personally as far as possible. I certainly don't promote Black Duck's products. I promote license compliance.

    Let's look at it like this: if companies building products based on Free Software think they need software tools to help them with license compliance, and they don't want to develop such tools together in a collaborative Free Software project themselves, then that's their decision to take. To put it in the words of Rosa Luxemburg:

    Freedom is always the freedom of those who think differently

    I may not like that others want to use proprietary software, but if they think it's good for them, it's their decision to take.

    by Harald Welte at May 27, 2016 01:00 AM

    May 26, 2016

    Harald Welte

    Osmocom.org GTP-U kernel implementation merged mainline

    Have you ever used mobile data on your phone, or used tethering?

    In packet-switched cellular networks (aka mobile data) from GPRS to EDGE, from UMTS to HSPA and all the way into modern LTE networks, there is a tunneling protocol called GTP (GPRS Tunneling Protocol).

    This was the first cellular protocol that involved transport over TCP/IP, as opposed to the ISDN/E1/T1/Frame Relay world with its weird protocol stacks. So it should have been something super easy to implement on and in Linux, and nobody should have had a reason to run a proprietary GGSN, ever.
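
    To give a feel for how simple the user plane actually is, here is a hedged sketch of the mandatory 8-byte GTPv1-U header that wraps each user IP packet inside a UDP datagram (port 2152). This is an illustration based on the 3GPP TS 29.281 layout, not code taken from the kernel's gtp.c; the struct and field names are my own.

    #include <stdint.h>

    /* Illustrative layout of the mandatory GTPv1-U header; everything after
     * these 8 bytes is the encapsulated user IP packet. */
    struct gtp1u_hdr {
            uint8_t  flags;   /* version = 1, protocol type = 1, no options -> 0x30 */
            uint8_t  type;    /* 0xff = G-PDU, i.e. tunnelled user data */
            uint16_t length;  /* payload length in bytes, network byte order */
            uint32_t teid;    /* tunnel endpoint identifier chosen by the peer */
    } __attribute__((packed));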

    However, the cellular telecom world lives in a different universe, and to this day it is safe to assume that all production GGSNs are proprietary hardware and/or software :(

    In 2002, Jens Jakobsen at Mondru AB released the initial version of OpenGGSN, a userspace implementation of this tunneling protocol and the GGSN network element. Development however ceased in 2005, and we at the Osmocom project thus adopted OpenGGSN maintenance in 2016.

    Having a userspace implementation of any tunneling protocol of course only works for relatively low bandwidth, due to the scheduling and memory-copying overhead between kernel, userspace, and kernel again.

    So OpenGGSN might have been useful for early GPRS networks, where the maximum data rate per subscriber is in the hundreds of kilobits, but it certainly is not viable for any real operator, particularly not at today's data rates.

    That's why for decades, all commonly used IP tunneling protocols have been implemented inside the Linux kernel, which has some tunneling infrastructure used with tunnels like IP-IP, SIT, GRE, PPTP, L2TP and others.

    But then again, the cellular world lives in a universe where Free and Open Source Software didn't exist until OpenBTS and OpenBSC changed all of that from 2008 onwards. So nobody ever bothered to add GTP support to the in-kernel tunneling framework.

    In 2012, I started an in-kernel implementation of GTP-U (the user plane with actual user IP data) as part of my work at sysmocom. My former netfilter colleague and current netfilter core team leader Pablo Neira was contracted to bring it further along, but unfortunately the customer project funding the effort was discontinued, and we didn't have time to complete it.

    Luckily, in 2015 Andreas Schultz of Travelping came around, forward-ported the old code to a more modern kernel, fixed numerous bugs and started to test and use it. He also kept pushing Pablo and me for review and submission - thanks for that!

    Finally, in May 2016, the code was merged into the mainline kernel, and now every upcoming version of the Linux kernel will have a fast and efficient in-kernel implementation of GTP-U. It is configured via netlink from userspace, where you are expected to run a corresponding daemon for the control plane, such as either OpenGGSN, or the new GGSN + PDN-GW implementation in Erlang called erGW.

    You can find the kernel code at drivers/net/gtp.c, and the userspace netlink library code (libgtpnl) at git.osmocom.org.

    I haven't done actual benchmarking of the performance that you can get on modern x86 hardware with this, but I would expect it to be on par with what you can get from other similar in-kernel tunneling implementations.

    Now that the cellular industry has failed for decades to realize how little effort would have been needed to have a fast and inexpensive GGSN around, let's see whether, now that other people have done it for them, there will be some adoption.

    If you're interested in testing or running a GGSN or PDN-GW and becoming an early adopter, feel free to reach out to Andreas, Pablo and/or me. The osmocom-net-gprs mailing list might be a good way to discuss further development and/or testing.

    by Harald Welte at May 26, 2016 10:00 AM

    May 25, 2016

    Free Electrons

    Linux 4.6 released, with Free Electrons contributions

    The 4.6 version of the Linux kernel was released last Sunday by Linus Torvalds. As usual, LWN.net had very nice coverage of this development cycle's merge window, highlighting the most significant changes and improvements: part 1, part 2 and part 3. KernelNewbies is now active again, and has a very detailed page about this release.

    Out of a total of 13517 non-merge commits, Free Electrons contributed 107 non-merge commits to this release, a number slightly lower than our contributions to previous kernel releases. That being said, there are still a few interesting contributions in those 107 patches. We are particularly happy to see patches from all our eight engineers in this release, including from Mylène Josserand and Romain Perier, who just joined us mid-March! We also already have 194 patches lined up for the next 4.7 release.

    Here are the highlights of our contributions to the 4.6 release:

    • Atmel ARM processors support
      • Alexandre Belloni and Boris Brezillon contributed a number of patches to improve and cleanup the support for the PMC (Power Management and Clocks) hardware block. As expected, this involved patching both clock drivers and power management code for the Atmel platforms.
    • Annapurna Labs Alpine platforms support
      • As a newly appointed maintainer of the Annapurna Labs ARM/ARM64 Alpine platforms, Antoine Ténart contributed the base support for the ARM64 Alpine v2 platform: base platform support and Device Tree, and an interrupt controller driver to support MSI-X
    • Marvell ARM processors support
      • Grégory Clement added initial support for the Armada 3700, a new Cortex-A53 based ARM64 SoC from Marvell, as well as a first development board using this SoC. So far, the supported features are: UART, USB and SATA (as well as of course timers and interrupts).
      • Thomas Petazzoni added initial support for the Armada 7K/8K, a new Cortex-A72 based ARM64 SoC from Marvell, as well as a first development board using this SoC. So far, UART, I2C and SPI are supported. However, due to the lack of clock drivers, this initial support can’t be booted yet; the clock drivers and additional support are on their way to 4.7.
      • Thomas Petazzoni contributed an interrupt controller driver for the ODMI interrupt controller found in the Armada 7K/8K SoC.
      • Grégory Clement and Thomas Petazzoni did a few improvements to the support of Armada 38x. Thomas added support for the NAND flash used on Armada 370 DB and Armada XP DB.
      • Boris Brezillon contributed a number of fixes to the Marvell CESA driver, which is used to control the cryptographic engine found in most Marvell EBU processors.
      • Thomas Petazzoni contributed improvements to the irq-armada-370-xp interrupt controller driver, to use the new generic MSI infrastructure.
    • Allwinner ARM processors support
      • Maxime Ripard contributed a few improvements to Allwinner clock drivers, and a few other fixes.
    • MTD and NAND flash subsystem
      • As a maintainer of the NAND subsystem, Boris Brezillon did a number of contributions in this area. Most notably, he added support for the randomizer feature to the Allwinner NAND driver as well as related core NAND subsystem changes. This change is needed to support MLC NANDs on Allwinner platforms. He also contributed several patches to continue clean up and improve the NAND subsystem.
      • Thomas Petazzoni fixed an issue in the pxa3xx_nand driver used on Marvell EBU platforms that prevented using some of the ECC configurations (such as 8 bits BCH ECC on 4 KB pages). He also contributed minor improvements to the generic NAND code.
    • Networking subsystem
      • Grégory Clement contributed an extension to the core networking subsystem that allows to take advantage of hardware capable of doing HW-controlled buffer management. This new extension is used by the mvneta network driver, useful for several Marvell EBU platforms. We expect to extend this mechanism further in the future, in order to take advantage of additional hardware capabilities.
    • RTC subsystem
      • As a maintainer of the RTC subsystem, Alexandre Belloni did a number of fixes and improvements in various RTC drivers.
      • Mylène Josserand contributed a few improvements to the abx80x RTC driver.
    • Altera NIOSII support
      • Romain Perier contributed two patches to fix issues in the kernel running on the Altera NIOSII architecture. The first one, covered in a previous blog post, fixed the NIOSII-specific memset() implementation. The other patch fixes a problem in the generic futex code.

    In addition, several of our engineers are maintainers of various platforms or subsystems, so they do a lot of work reviewing and merging contributions from other kernel developers. This effort can be measured by looking at the number of patches that carry their Signed-off-by but for which they are not the author. Here are those numbers for our engineers:

    • Alexandre Belloni, as the RTC subsystem maintainer and the Atmel ARM co-maintainer: 91 patches
    • Maxime Ripard, as the Allwinner ARM co-maintainer: 65 patches
    • Grégory Clement, as the Marvell EBU ARM co-maintainer: 45 patches
    • Thomas Petazzoni, simply resubmitting patches from others: 2 patches

    Here is the detailed list of our contributions to the 4.6 kernel release:

    by Thomas Petazzoni at May 25, 2016 07:10 AM

    May 23, 2016

    Bunnie Studios

    Name that Ware, May 2016

    The Ware for May 2016 is shown below.

    Xobs discovered this morsel of technology sitting in the junk pile at his HDB, and brought it into the office for me to have a look at. I hadn’t seen one of these first-hand until then.

    Despite being basically a picture of two large hunks of metal, I’m guessing this ware will be identified within minutes of going up.

    by bunnie at May 23, 2016 01:20 PM

    Winner, Name that Ware April 2016

    Really great participation this month in Name that Ware!

    The Ware for April 2016 is a “LED-Handbrause” by miomare — in other words, a shower head with LEDs on the inside which tell you the temperature of the water. It has an integral paddlewheel that generates power for the circuitry via water flowing through the shower head, as evidenced by this more complete photo of the ware:

    It looks like LW was the first to guess the function of the ware, so congrats! Email me for your prize. And thanks again to Philipp Gühring for submitting a ware that sparked so much interesting discussion!

    by bunnie at May 23, 2016 01:20 PM

    May 22, 2016

    Elphel

    Tutorial 02: Eclipse-based FPGA development environment for Elphel cameras

    Elphel cameras offer unique capabilities – they are high performance systems out of the box and have all the firmware and FPGA code distributed under GNU General Public Licenses, making it possible for users to modify any part of the code. The project does not use any "black boxes" or encrypted modules, so it can be simulated with free software tools and the user has access to every net in the design. We are trying to do our best to make this ‘hackability’ not just a theoretical possibility, but a practical one.

    The current camera FPGA project contains over 400 files under version control and almost 100K lines of HDL (Verilog) code; there are also constraints files and tool configurations, so we need to provide means for convenient navigation and modification of the project by users.

    We are starting a series of tutorials to facilitate acquaintance with this project, and here is the first one that shows how to install and configure the software. This tutorial is made with a fresh Kubuntu 16.04 LTS distribution installed on a virtual machine – we use this flavor of GNU/Linux ourselves, so it is easier for us to help others in case of problems, but it should also be easy to install on other GNU/Linux systems.

    Later we plan to show how to navigate the code and view/modify tool parameters with the VDT plugin, and how to run the simulation and implementation tools. Next will be a "Hello world" module added to the camera code base, then a simple module that accesses the video memory.



    Video resolution is 1600×900 pixels, so full screen view is recommended.

    Download links for: video and captions.

    Running this software does not require an actual camera, so it may help potential users evaluate the software's capabilities and see if it matches their requirements before purchasing actual hardware. We will also be able to provide remote access to the cameras in our office for experimenting with them.

    by Andrey Filippov at May 22, 2016 10:20 PM

    ZeptoBARS

    Silicon Labs Si8641 - quad channel digital isolator : weekend die-shot

    Silicon Labs Si8641 uses capacitive coupling to implement digital isolation (up to 5kV, this model 2.5kV) at speeds of up to 1 Mbps.
    This particular model (Si8641AB) contains 2 identical dies, apparently configured by bonding some of the pads on the sides.




    May 22, 2016 03:50 AM

    May 21, 2016

    Harald Welte

    Slovenian student sentenced for detecting TETRA flaws using OsmocomTETRA

    According to some news reports, including this report at softpedia, a 26 year old student at the Faculty of Criminal Justice and Security in Maribor, Slovenia has received a suspended prison sentence for finding flaws in the Slovenian police and army TETRA network using OsmocomTETRA.

    As the Osmocom project leader and main author of OsmocomTETRA, this is highly disturbing news to me. OsmocomTETRA was precisely developed to enable people to perform research and analysis in TETRA networks, and to audit their safe and secure configuration.

    If a TETRA network (like any other network) is configured with broken security, then the people responsible for configuring and operating that network are to blame, not the researcher who invests his personal time and effort into demonstrating that police radio communications safety is broken. From the outside, the court sentence really sounds like "shoot the messenger". They should instead have jailed the people responsible for deploying such an insecure network in the first place, as well as those responsible for not doing the most basic air-interface interception tests before putting such a network into production.

    According to all reports, the student had shared the results of his research with the authorities and there are public detailed reports from 2015, like the report (in Slovenian) at https://podcrto.si/vdor-v-komunikacijo-policije-razkril-hude-varnostne-ranljivosti-sistema-tetra/.

    The statement that he should have asked the authorities for permission before starting his research is moot. I've seen many such cases and you would normally never get permission to do this, or you would most likely get no response from the (in)competent authorities in the first place.

    From my point of view, they should give the student a medal of honor, instead of sentencing him. He has provided a significant service to the security of the public sector communications in his country.

    To be fair, the news report also indicates that there were other charges involved, like impersonating a police officer. I can of course not comment on those.

    Please note that I do not know the student or his research first-hand, nor did I know any of his actions or was involved in them. OsmocomTETRA is a Free / Open Source Software project available to anyone in source code form. It is a vital tool in demonstrating the lack of security in many TETRA networks, whether networks for public safety or private networks.

    by Harald Welte at May 21, 2016 10:00 PM

    May 20, 2016

    Video Circuits

    Video Synthesis Techniques

    So I am doing a talk at Raven Row as part of their very lovely exhibition of Steina & Woody Vasulka's work, and will be demoing some of the techniques they used to make their work. Hopefully it will be interesting.

    "Chris King: Video Circuits
    Thursday 2 June, 6.30pm

    Artist Chris King leads a live demonstration of early media art and video synthesis technologies, working with a selection of different techniques used by the Vasulkas and other video artists during the 1970s and 80s."
    http://www.ravenrow.org/events/chris_king_video_circuits/

    by Chris (noreply@blogger.com) at May 20, 2016 03:34 AM

    DIY Colouriser

    Here are some snaps of the colouriser I built for my DIY system based on the board from the Visualist






    by Chris (noreply@blogger.com) at May 20, 2016 03:28 AM

    Seeing Sound

    Here are some shots from my talk, the video circuits screening, Alex's piece and Andrew's performance at Seeing Sound. It was great fun.








    by Chris (noreply@blogger.com) at May 20, 2016 03:23 AM

    May 16, 2016

    Free Electrons

    Linux kernel support for Microcrystal RTCs

    Thanks to Microcrystal, a Switzerland-based real-time clock vendor, Free Electrons has contributed support for a number of new I2C and SPI based real-time clocks to the Linux kernel over the last few months. More specifically, we added or improved support for the Microcrystal RV-1805, RV-4162, RV-3029 and RV-3049. In this blog post, we detail the contributions we have made to support those real-time clocks.

    RV-1805

    The RV-1805 RTC is similar to the Abracon 1805 one, for which a driver already existed in the Linux kernel. Therefore, support for the RV-1805 RTC was added to the same driver, rtc-abx80x.c. The patch which adds support for this RTC has been upstream since v4.5 (see Free-Electrons contributions to linux 4.5). In that kernel version, support for the alarm was also added. In the 4.6 kernel release, support for two additional functionalities has been contributed: oscillator selection and handling of oscillator failure.

    The oscillator selection functionality allows selecting between the two oscillators available in this RTC:

    • The XT oscillator, a more stable, but also more power-hungry oscillator
    • The RC oscillator, a less accurate, but also more power-efficient oscillator

    This patch adds the possibility to select which oscillator the RTC should use and also a way to configure the auto-calibration (auto-calibration is a feature that calibrates the RC oscillator using the digital XT oscillator).

    To select the oscillator, a sysfs entry has been added:

    cat /sys/class/rtc/rtc0/device/oscillator
    

    To configure and activate the autocalibration, another sysfs entry has been added:

    cat /sys/class/rtc/rtc0/device/autocalibration
    

    Here is an example of using the RC oscillator and an autocalibration cycle of 512 seconds.

    echo rc > /sys/class/rtc/rtc0/device/oscillator
    echo 512 > /sys/class/rtc/rtc0/device/autocalibration
    

    The other functionality that was added is handling the Oscillator Failure situation (see this patch). The Oscillator Failure is detected when the XT oscillator generates ticks at less than 8 kHz for more than 32 ms. In this case, the date and time can be wrong so an error is returned when an attempt to read the date from the RTC is made. This Oscillator Failure condition is cleared when a new date/time is set into the RTC.
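
    As a hedged illustration of what this behaviour looks like from userspace (assuming the RTC is registered as /dev/rtc0; this is not code from the driver itself), a plain RTC_RD_TIME read fails while the Oscillator Failure condition is present, and works again after a new date/time has been written:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/rtc.h>

    int main(void)
    {
            struct rtc_time tm;
            int fd = open("/dev/rtc0", O_RDONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* Fails with an error while the Oscillator Failure flag is set */
            if (ioctl(fd, RTC_RD_TIME, &tm) < 0)
                    perror("RTC_RD_TIME");
            else
                    printf("%04d-%02d-%02d %02d:%02d:%02d\n",
                           tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
                           tm.tm_hour, tm.tm_min, tm.tm_sec);

            close(fd);
            return 0;
    }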

    RV-4162

    The RV-4162 RTC is similar to the ST M41T80 RTC family, so the existing driver has been used as well. However, as this driver was quite old, eight patches have been contributed to update it to newer APIs and to add new functionalities such as oscillator failure handling and alarm support. The patches have already been merged by RTC maintainer Alexandre Belloni and should therefore find their way into the 4.7 Linux kernel release:

    See [PATCH 0/8] rtc: m41t80: update and add functionalities for the entire patch series. Thanks to this project, the RV-4162 is now supported in the Linux Kernel and the entire family of I2C-based M41T80 RTCs will benefit from these improvements.

    RV-3029 / RV-3049

    The RV-3029 RTC driver already existed in the Linux kernel, and the RV-3049 is the same reference as the RV-3029, but with an SPI-based interface instead of an I2C one. This is a typical case where the regmap mechanism of the Linux kernel is useful: it abstracts the register accesses, regardless of the bus being used to communicate with the hardware. Thanks to this, a single driver can easily handle two devices that are interfaced over different busses but offer the same register set, which is the case with the RV-3029 on I2C and the RV-3049 on SPI.
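
    Here is a hedged sketch of that idea (not the actual rtc-rv3029 driver code; the register offset, configuration values and function names are illustrative assumptions): both probe paths create a regmap, and all other code only ever sees the bus-agnostic regmap handle.

    #include <linux/err.h>
    #include <linux/regmap.h>
    #include <linux/i2c.h>
    #include <linux/spi/spi.h>

    static const struct regmap_config example_regmap_config = {
            .reg_bits = 8,          /* illustrative assumption */
            .val_bits = 8,
    };

    /* Bus-agnostic logic: register access only goes through the regmap handle */
    static int example_read_seconds(struct regmap *map, unsigned int *seconds)
    {
            return regmap_read(map, 0x08 /* hypothetical seconds register */, seconds);
    }

    /* I2C flavour of the probe (RV-3029-style) */
    static int example_i2c_probe(struct i2c_client *client)
    {
            struct regmap *map = devm_regmap_init_i2c(client, &example_regmap_config);

            return IS_ERR(map) ? PTR_ERR(map) : 0;  /* ...then register the RTC device */
    }

    /* SPI flavour of the probe (RV-3049-style) */
    static int example_spi_probe(struct spi_device *spi)
    {
            struct regmap *map = devm_regmap_init_spi(spi, &example_regmap_config);

            return IS_ERR(map) ? PTR_ERR(map) : 0;  /* ...then register the RTC device */
    }

    Only the two probe functions differ; everything that actually implements the RTC logic can be shared between the I2C and SPI variants.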

    For this driver, some updates were needed to prepare the switch to the regmap mechanism. Once the driver had been converted to regmap and worked as before, RV-3049 support was added. Finally, the alarm functionality was added and fixed. The corresponding patches have already been merged by the RTC maintainer, and should therefore also be part of Linux 4.7:

    Conclusion

    It is great to see hardware vendors actively engaged in having support for their hardware in the upstream Linux kernel. This way, their users can immediately use the kernel version of their choice on their platform, without having to mess with outdated out-of-tree drivers. Thanks to Microcrystal for supporting this work!

    Do not hesitate to contact us if you would like to see your hardware supported in the official Linux kernel. Read more about our Linux kernel upstreaming services.

    by Mylène Josserand at May 16, 2016 03:23 PM

    May 11, 2016

    ZeptoBARS

    You can now support Zeptobars at Patreon and more


    We have been running this blog for more than 3 years with no monetization of any kind (advertisements, merchandise and such), but from time to time people kept asking how they could help us. On the other hand, we've probably reached the limits of our resources to improve the quality of our lab/imaging setup, and we'll need your help to move further.

    We've finally outlined a number of voluntary ways you can support our efforts to produce higher-quality microchip photographs for curiosity and education.

    Basically, there are 4 ways - spread the word, send us a few cool chips for future work, support us at our Patreon campaign (which would allow you to schedule a small contribution for each new die shot we publish) or send us some Bitcoins (or use good old Paypal).

    Either way, the content of this blog will remain free for everyone and will continue to be licensed under the permissive CC BY 3.0 license.

    May 11, 2016 04:36 AM

    May 10, 2016

    Elphel

    3D Print Your Camera Freedom

    Two weeks ago we were taking photos of our first production NC393 camera to post an announcement of the new product's availability. We got all the mechanical parts and most of the electronic boards (a 14MPix version will be available shortly) and put them together. A nice looking camera, powered by a high performance SoC (dual ARM plus FPGA), packaged in a lightweight aluminum extrusion body, providing different options for various environments – indoors, outdoors, on board a UAV or even in open space with no air (cooling is important when you run most of the FPGA resources at full speed). Tons of potential possibilities, but the finished camera did not seem too exciting – there are so many similar looking devices available.

    NC393 camera, front view

    NC393 camera, back panel view. Includes DC power input (12-36V and 20-75V options), GigE, microSD card (bootable), microUSB(type B) connector for a system console with reset and boot source selection, USB/eSATA combo connector, microUSB(type A) and 2.5mm 4-contact barrel connector for external synchronization I/O

    NC393 assembled boards: 10393(system board), 10385 (power supply board), 10389(interface board), 10338e (sensor board) and 103891 - synchronization adapter board, view from 10389. m.2 2242 SSD shown, bracket for the 2260 format provided. 10389 internal connectors include inter-camera synchronization and two of 3.3VDC+5.0VDC+I2C+USB ones.

    NC393 assembled boards: 10393(system board), 10385 (power supply board), 10389(interface board), 10338e (sensor board) and 103891 - synchronization adapter board, view from 10385

    10393 system board attached to the heat frame, view from the heat frame. There is a large aluminum heat spreader attached to the other side of the frame with thermal conductive epoxy that provides heat transfer from the CPU without the use of any spring load. Other heat dissipating components use heat pads.

    10393 system board attached to the heat frame, view from the 10393 board

    10393 system board, view from the processor side

    An obvious reason for our dissatisfaction is that the single-sensor camera uses just one of four available sensor ports. Of course it is possible to use more of the freed FPGA resources for a single image processing, but it is not what you can use out of the box. Many of our users buy camera components and arrange them in their custom setup themselves – that does not have a single-sensor limitation and it matches our goals – make it easy to develop a custom system, or sculpture the camera to meet your ideas as stated on our web site. We would like to open the cameras to those who do not have capabilities of advanced mechanical design and manufacturing or just want to try new camera ideas immediately after receiving the product.

    Why multisensor?

    One simple answer can be "because we can" – the CPU+FPGA based camera system can simultaneously handle multiple small image sensors we love – sensors perfected by the cellphone industry. Of course it is also possible to connect one large (high resolution/high FPS) sensor, or even to use a multiple-camera system for one really fast sensor – we did such a trick with the NC323 camera for book scanning – but we believe that the future is with multiple view systems that combine images from several synchronized cameras using computational photography rather than with traditional large lens/large sensor cameras.

    Multi-sensor systems can acquire high-resolution panoramic images in a single shot (or offer full sphere live video), they can be used for image-based 3-d reconstruction that in many cases provide much superior quality to the now traditional LIDAR-based scanners which can not output cinematographic quality 3-d scenes. They can be used to capture HDR video by combining data for the same voxels rather than pixels. Such systems can easily beat the shallow depth of field of the traditional large format cameras and offer possibility of the post-production focus distance adjustment. Applications are virtually endless, and while at Elphel we are developing such multi-sensor systems our main products are still the high-performance camera systems hackable at any imaginable level.

    Prototype of the 21-sensor 3D HDR 6K cinematographic camera

    Eyesis4π stereophotogrammetric camera

    NC353-369-PHG3 3-sensor camera camera, view demo (mouse scroll changes disparity, shift-scroll - zoom)

    Spherical view camera with two fish eye lenses

    Two sensor stereo camera with two interchangeable C/CS-mount lenses

    SCINI project underwater remotely operated vehicle (ROV)

    A helmet-mounted panoramic camera by HomeSide 720°

    Quadcopter using multisensor camera for navigation, by 'Autonomous Aerospace' team in Krasnoyarsk, Russia

    Multisensor R5 camera by Google

    Hackable by Design

    To have all documentation open and released under free licenses such as GNU GPL and CERN OHL is a precondition, but it is not sufficient. The hackable products must be designed to be used that way and we strive to provide this functionality to our users. This is true for the products themselves and for the required tools, so we had to go as far as to develop software for FPGA tools integration with the popular Eclipse IDE and replace closed source manufacturer code that is not compatible with the free software Verilog simulators.

    Same is true for the camera mechanical parts – users need to be able to reconfigure not just the firmware, FPGA code or rearrange the electronic components, but to change the physical layout of their systems. One popular solution to this challenge is to offer modular camera systems, but unfortunately this approach has its limits. It is similar to Lego® sets (where kids can assemble just one object already designed by the manufacturer) vs. Lego® bricks where the possibilities are limited by the imagination only. Often camera modularity is more about marketing (suggesting that you can start with a basic less expensive set and later buy more parts) than about the real user freedom.

    We too provide modular components and try to maintain compatibility between the generations of modules – new Elphel cameras can directly interface more than a decade old sensor boards and this does not prevent them from simultaneously supporting modern sensor interfaces. Physical dimensions and shapes of the camera electronic boards also remain the same – they just pack more performance in the same volume as newer components become available. Being in the business of developing hackable cameras for 15 years, we realize that the modularity alone is not a magic bullet. Luckily now there are other possibilities.

    3d printing camera parts

    The 3d printing process offers freedom in the material world, but so far we have been pessimistic about its use for camera components where microns often matter. Some of the camera modules use invar (a metal alloy that has an almost zero thermal expansion coefficient at normal temperatures) elements to compensate for thermal expansion, and PLA plastic parts seem rather alien here. Nevertheless it is possible to meet the requirements of the camera mechanical design even with this material. In some cases it is sufficient to have a precise and stable sensor/lens combination – the sensor front end (SFE); small fluctuations in the mutual position/orientation of the individual SFEs may be compensated using the image data itself in the overlapping areas. It is possible to design a composite structure that combines metal elements of simple shape (such as aluminum, thin wall stainless steel tubes or even small diameter invar rods) and printed elements of complex shape. Modern fiber-reinforced materials for 3d-printing promise to improve mechanical stability and reduce thermal expansion of the finished parts.

    This technology is a perfect fit for hackable multi-sensor systems and fills an important missing part of "sculpturing" the user's camera. 3-d printing is slow and we can not print every camera, but that is not really needed. While we certainly can print some parts, we are counting on this technology now being available in most parts of the world where we ship the products, so the parts can be manufactured by the end user. We anticipate that many of the customer designs, being experimental by nature, will need later modifications; having the user build the parts can save on overseas shipments too.

    We expect that users will design their own parts, but we will try to make their job easier and provide modifiable design examples and fragments that can be used in their parts. This idea of incorporating 3-d printing technology into Elphel products is just 2 weeks old and we prepared several quick design prototypes to try it – below are some examples of our first generation of such camera parts.

    Panoramic camera with perfect stitching - it uses 2 center cameras to measure distances

    Stereo camera with 4 sensors having 1:3:2 bases providing all integer 1 to 6 multiples of 43mm in the lens pairs

    Rectangular arranged 4-sensor stereo camera, adjustable bases

    Short-base (48mm from center) 4-sensor camera

    Printed adapter for the SFE of the 4-sensor panoramic camera

    Printed adapter for the short-base 4-sensor camera

    Various 3-d printed camera parts

    It takes about 3 hours to print one SFE adapter

    Deliverables

    “3d print your camera freedom” – we really mean that. It is not about printing of a camera or its body. You can always get a complete camera in one of the available configurations packaged in a traditional all-metal body if it matches your ideas, printing just adds freedom to the mechanical design.

    We will continue to provide the full spectrum of camera components such as assembled boards and sensor front ends, as well as complete cameras in multiple configurations. For the 3-d printed versions we will have the models and reusable design fragments posted online. We will be able to print some parts and ship factory assembled cameras. In some cases we may be able to help with the mechanical design, but we try to avoid doing any custom design ourselves. We consider our job done well if we are not needed to modify anything for the end user. Currently we use one of the proprietary mechanical CAD programs, so we do not have fully editable models and can only provide exported STEP files of the complete parts and interface fragments that can be incorporated in user designs.

    We would like to learn how to do this in FreeCAD – then it will be possible to provide usable source files and detailed instructions on how to customize them. The FreeCAD environment can be used to create custom generator scripts in Python – this powerful feature helped us convert all our mechanical design files into x3d models that can be viewed and navigated in the browser (video tutorial). This web based system proved to be not just a good presentation tool but also more convenient for parts navigation than the CAD program itself; we use it regularly for that purpose.

    Maybe we’ll be able to find somebody who is both experienced in mechanical design in FreeCAD and interested in multi-sensor camera systems to cooperate on this project?

    by Andrey Filippov at May 10, 2016 07:31 PM

    ZeptoBARS

    ST HCF4056 - BCD to 7 segment : weekend die-shot

    ST HCF4056 is a CMOS BCD to 7 segment decoder/driver with strobed latch.


    May 10, 2016 03:17 PM

    May 08, 2016

    Andrew Zonenberg, Silicon Exposed

    Open Verilog flow for Silego GreenPak4 programmable logic devices

    I've written a couple of posts in the past few months but they were all for the blog at work so I figured I'm long overdue for one on Silicon Exposed.

    So what's a GreenPak?


    Silego Technology is a fabless semiconductor company located in the SF Bay area, which makes (among other things) a line of programmable logic devices known as GreenPak. Their 5th generation parts were just announced, but I started this project before that happened so I'm still targeting the 4th generation.

    GreenPak devices are kind of like itty bitty PSoCs - they have a mixed signal fabric with an ADC, DACs, comparators, voltage references, plus a digital LUT/FF fabric and some typical digital MCU peripherals like counters and oscillators (but no CPU).

    It's actually an interesting architecture - FPGAs (including some devices marketed as CPLDs) are a 2D array of LUTs connected via wires to adjacent cells, and true (product term) CPLDs are a star topology of AND-OR arrays connected by a crossbar. GreenPak, on the other hand, is a star topology of LUTs, flipflops, and analog/digital hard IP connected to a crossbar.

    Without further ado, here's a block diagram showing all the cool stuff you get in the SLG46620V:

    SLG46620V block diagram (from device datasheet)
    They're also tiny (the SLG46620V is a 20-pin 0.4mm pitch STQFN measuring 2x3 mm, and the lower gate count SLG46140V is a mere 1.6x2 mm) and probably the cheapest programmable logic device on the market - $0.50 in low volume and less than $0.40 in larger quantities.

    The Vdd range of GreenPak4 is huge, more like what you'd expect from an MCU than an FPGA! It can run on anything from 1.8 to 5V, although performance is only specified at 1.8, 3.3, and 5V nominal voltages. There's also a dual-rail version that trades one of the GPIO pins for a second power supply pin, allowing you to interface to logic at two different voltage levels.

    To support low-cost/space-constrained applications, they even have the configuration memory on die. It's one-time programmable and needs external Vpp to program (presumably Silego didn't want to waste die area on charge pumps that would only be used once) but has a SRAM programming mode for prototyping.

    The best part is that the development software (GreenPak Designer) is free of charge and provided for all major operating systems including Linux! Unfortunately, the only supported design entry method is schematic entry and there's no way to write your design in a HDL.

    While schematics may be fine for quick tinkering on really simple designs, they quickly get unwieldy. The nightmare of a circuit shown below is just a bunch of counters hooked up to LEDs that blink at various rates.

    Schematic from hell!
    As if this wasn't enough of a problem, the largest GreenPak4 device (the SLG46620V) is split into two halves with limited routing between them, and the GUI doesn't help the user manage this complexity at all - you have to draw your schematic in two halves and add "cross connections" between them.

    The icing on the cake is that schematics are a pain to diff and collaborate on. Although GreenPak schematics are XML based, which is a touch better than binary, who wants to read a giant XML diff and try to figure out what's going on in the circuit?

    This isn't going to be a post on the quirks of Silego's software, though - that would be boring. As it turns out, there's one more exciting feature of these chips that I didn't mention earlier: the configuration bitstream is 100% documented in the device datasheet! This is unheard of in the programmable logic world. As Nick of Arachnid Labs says, the chip is "just dying for someone to write a VHDL or Verilog compiler for it". As you can probably guess from the title of this post, I've been busy doing exactly that.

    Great! How does it work?


    Rather than wasting time writing a synthesizer, I decided to write a GreenPak technology library for Clifford Wolf's excellent open source synthesis tool, Yosys, and then make a place-and-route tool to turn that into a final netlist. The post-PAR netlist can then be loaded into GreenPak Designer in order to program the device.

    The first step of the process is to run the "synth_greenpak4" Yosys flow on the Verilog source. This runs a generic RTL synthesis pass, then some coarse-grained extraction passes to infer shift register and counter cells from behavioral logic, and finally maps the remaining logic to LUT/FF cells and outputs a JSON-formatted netlist.

    Once the design has been synthesized, my tool (named, surprisingly, gp4par) is then launched on the netlist. It begins by parsing the JSON and constructing a directed graph of cell objects in memory. A second graph, containing all of the primitives in the device and the legal connections between them, is then created based on the device specified on the command line. (As of now only the SLG46620V is supported; the SLG46621V can be added fairly easily but the SLG46140V has a slightly different microarchitecture which will require a bit more work to support.)

    After the graphs are generated, each node in the netlist graph is assigned a numeric label identifying the type of cell and each node in the device graph is assigned a list of legal labels: for example, an I/O buffer site is legal for an input buffer, output buffer, or bidirectional buffer.

    Example labeling for a subset of the netlist and device graphs
    The labeled nodes now need to be placed. The initial placement uses a simple greedy algorithm to create a valid (although not necessarily optimal or even routable) placement:
    1. Loop over the cells in the netlist. If any cell has a LOC constraint, which locks the cell to a specific physical site, attempt to assign the node to the specified site. If the specified node is the wrong type, doesn't exist, or is already used by another constrained node, the constraint is invalid so fail with an error.
    2. Loop over all of the unconstrained cells in the netlist and assign them to the first unused site with the right label. If none are available, the design is too big for the device so fail with an error.
    Once the design is placed, the placement optimizer then loops over the design and attempts to improve it. A simulated annealing algorithm is used, where changes to the design are accepted unconditionally if they make the placement better, and with a random, gradually decreasing probability if they make it worse (a sketch of this accept/reject rule follows the list below). The optimizer terminates when the design receives a perfect score (indicating an optimal placement) or if it stops making progress for several iterations. Each iteration does the following:
    1. Compute a score for the current design based on the number of unroutable nets, the amount of routing congestion (number of nets crossing between halves of the device), and static timing analysis (not yet implemented, always zero).
    2. Make a list of nodes that contributed to this score in some way (having some attached nets unroutable, crossing to the other half of the device, or failing timing).
    3. Remove nodes from the list that are LOC'd to a specific location since we're not allowed to move them.
    4. Remove nodes from the list that have only one legal placement in the device (for example, oscillator hard IP) since there's nowhere else for them to go.
    5. Pick a node from the remainder of the list at random. Call this our pivot.
    6. Find a list of candidate placements for the pivot:
      1. Consider all routable placements in the other half of the device.
      2. If none were found, consider all routable placements anywhere in the device.
      3. If none were found, consider all placements anywhere in the device even if they're not routable.
    7. Pick one of the candidates at random and move the pivot to that location. If another cell in the netlist is already there, put it in the vacant site left by the pivot.
    8. Re-compute the score for the design. If it's better, accept this change and start the next iteration.
    9. If the score is worse, accept it with a random probability which decreases as the iteration number goes up. If the change is not accepted, restore the previous placement.
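
    Below is a hedged sketch of just the accept/reject step from steps 8 and 9 above; it is not gp4par's actual code. The exponential cooling schedule and its constants are illustrative assumptions (the text above only says the acceptance probability decreases with the iteration number), and a lower score is assumed to mean a better placement.

    #include <math.h>
    #include <stdbool.h>
    #include <stdlib.h>

    /* Decide whether to keep a candidate change to the placement. */
    static bool accept_change(int old_score, int new_score, int iteration)
    {
            /* An improvement (or an equal score) is always accepted. */
            if (new_score <= old_score)
                    return true;

            /* Otherwise accept with a probability that shrinks as iterations go up. */
            double temperature = 100.0 * exp(-iteration / 50.0);   /* assumed schedule */
            double probability = exp(-(double)(new_score - old_score) / temperature);

            return (double)rand() / RAND_MAX < probability;
    }
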
    After optimization, the design is checked for routability. If any edges in the netlist graph don't correspond to edges in the device graph, the user probably asked for something impossible (for example, trying to hook a flipflop's output to a comparator's reference voltage input) so fail with an error.

    The design is then routed. This is quite simple due to the crossbar structure of the device (a sketch of the per-edge decision follows the list below). For each edge in the netlist:
    1. If dedicated (non-fabric) routing is used for this path, configure the destination's input mux appropriately and stop.
    2. If the source and destination are in the same half of the device, configure the destination's input mux appropriately and stop.
    3. A cross-connection must be used. Check if we already used one to bring the source signal to the other half of the device. If found, configure the destination to route from that cross-connection and stop.
    4. Check if we have any cross-connections left going in this direction. If they're all used, the design is unroutable due to congestion so fail with an error.
    5. Pick the next unused cross-connection and configure it to route from the source. Configure the destination to route from the cross-connection and stop.
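
    The per-edge decision above can be summarized in a hedged sketch like the following; the data structures, the bookkeeping and the number of available cross-connections are illustrative assumptions, not gp4par's real implementation.

    #include <stdbool.h>

    #define NUM_CROSS_CONNS 10                      /* assumed count per direction */

    struct edge {
            int  source_node;                       /* id of the driving cell */
            int  source_half, dest_half;            /* which device half each end is in */
            bool has_dedicated_route;               /* dedicated (non-fabric) routing? */
    };

    static int cross_conn_count;                    /* cross-connections used so far */
    static int cross_conn_source[NUM_CROSS_CONNS];  /* which source each one carries */

    /* Returns false if the design is unroutable due to congestion. */
    static bool route_edge(const struct edge *e)
    {
            /* Steps 1 and 2: no cross-connection needed, just set the input mux. */
            if (e->has_dedicated_route || e->source_half == e->dest_half)
                    return true;

            /* Step 3: reuse a cross-connection already carrying this source. */
            for (int i = 0; i < cross_conn_count; i++)
                    if (cross_conn_source[i] == e->source_node)
                            return true;

            /* Step 4: all cross-connections in this direction are already used. */
            if (cross_conn_count == NUM_CROSS_CONNS)
                    return false;

            /* Step 5: claim the next free cross-connection for this source. */
            cross_conn_source[cross_conn_count++] = e->source_node;
            return true;
    }
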
    Once routing is finished, run a series of post-PAR design rule checks. These currently include the following:
    • If any node has no loads, generate a warning
    • If an I/O buffer is connected to analog hard IP, fail with an error if it's not configured in analog mode.
    • Some signals (such as comparator inputs and oscillator power-down controls) are generated by a shared mux and fed to many loads. If different loads require conflicting settings for the shared mux, fail with an error.
    If DRC passes with no errors, configure all of the individual cells in the netlist based on the HDL parameters. Fail with an error if an invalid configuration was requested.

    Finally, generate the bitstream from all of the per-cell configuration and write it to a file.

    Great, let's get started!

    If you don't already have one, you'll need to buy a GreenPak4 development kit. The kit includes samples of the SLG46620V (among other devices) and a programmer/emulation board. While you're waiting for it to arrive, install GreenPak Designer.

    Download and install Yosys. Although Clifford is pretty good at merging my pull requests, only my fork on Github is guaranteed to have the most up-to-date support for GreenPak devices so don't be surprised if you can't use a bleeding-edge feature with mainline Yosys.

    Download and install gp4par. You can get it from the Github repository.

    Write your HDL, compile with Yosys, P&R with gp4par, and import the bitstream into GreenPak Designer to program the target device. The most current gp4par manual is included in LaTeX source form in the source tree and is automatically built as part of the compile process. If you're just browsing, there's a relatively recent PDF version on my web server.

    If you'd like to see the Verilog that produced the nightmare of a schematic I showed above, here it is.

    Be advised that this project is still very much a work in progress and there are still a number of SLG46620V features I don't support (see the manual for exact details).

    I love it / it segfaulted / there's a problem in the manual!

    Hop in our IRC channel (##openfpga on Freenode) and let me know. Feedback is great, pull requests are even better.

    You're competing with Silego's IDE. Have they found out and sued you yet?

    Nope. They're fully aware of what I'm doing and are rolling out the red carpet for me. They love the idea of a HDL flow as an alternative to schematic entry and are pretty amazed at how fast it's coming together.

    After I reported a few bugs in their datasheets they decided to skip the middleman and give me direct access to the engineer who writes their documentation so that I can get faster responses. The last time I found a problem (two different parts of the datasheet contradicted each other) an updated datasheet was in my inbox and on their website by the next day. I only wish Xilinx gave me that kind of treatment!

    They've even offered me free hardware to help me add support for their latest product family, although I plan to get GreenPak4 support to a more stable state before taking them up on the offer.

    So what's next?


    Better testing, for starters. I have to verify functionality by hand with a DMM and oscilloscope, which is time consuming.

    My contact at Silego says they're going to be giving me documentation on the SRAM emulation interface soon, so I'm going to make a hardware-in-loop test platform that connects to my desktop and the Silego ZIF socket, and lets me load new bitstreams via a scriptable interface. It'll have FPGA-based digital I/O as well as an ADC and DAC on every device pin, plus an adjustable voltage regulator for power, so I can feed in arbitrary mixed-signal test waveforms and write PC-based unit tests to verify correct behavior.

    Other than that, I want to finish support for the SLG46620V in the next month or two. The SLG46621V will be an easy addition since only one pin and the relevant configuration bits have changed from the 46620 (I suspect they're the same die, just bonded out differently).

    Once that's done I'll have to do some more extensive work to add the SLG46140V since the architecture is a bit different (a lot of the combinatorial logic is merged into multi-function blocks). Luckily, the 46140 has a lot in common architecturally with the GreenPak5 family, so once that's done GreenPak5 will probably be a lot easier to add support for.

    My thanks go out to Clifford Wolf, whitequark, the IRC users in ##openfpga, and everyone at Silego I've worked with to help make this possible. I hope that one day this project will become mature enough that Silego will ship it as an officially supported extension to GreenPak Designer, making history by becoming the first modern programmable logic vendor to ship a fully open source synthesis and P&R suite.

    by Andrew Zonenberg (noreply@blogger.com) at May 08, 2016 09:21 AM

    May 07, 2016

    Altus Metrum

    Altos1.6.3

    AltOS 1.6.3 —

    Bdale and I are pleased to announce the release of AltOS version 1.6.3.

    AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STM32F042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

    Version 1.6.3 adds idle mode to AltosDroid and has bug fixes for our host software on desktops, laptops and Android devices, along with BlueTooth support for Windows.

    1.6.3 is in Beta test for Android; if you want to use the beta version, join the AltosDroid beta program

    AltOS

    AltOS fixes:

    • Fix hardware flow control on TeleBT v3.0. RTS/CTS is wired backwards on this board, switch from using the hardware to driving these pins with software.

    AltosUI and TeleGPS Applications

    AltosUI and TeleGPS New Features:

    • Add BlueTooth support for Windows operating system. This supports connections to TeleBT over BlueTooth rather than just USB.

    AltosUI and TeleGPS Fixes:

    • Change Java detection and install on Windows. Detection is now done by looking for the 'javaw.exe' program, and installation by opening a browser on the java.com web site.

    • Delay polling while the Fire Igniters is visible to allow for TeleMega to report back complete status over the radio.

    • Disallow changing RF calibration numbers in the configuration UI. There's no good reason to change this from the field, and recovering is really hard if you haven't written down the right number.

    • Fix USB device discovery on Mac OS X El Capitan. This makes the connected Altus Metrum USB devices appear again.

    • Fix acceleration data presented in MonitorIdle mode for TeleMetrum v2.0 flight computers.

    AltosDroid

    AltosDroid new features:

• Monitor Idle mode. Check the state of the flight computer over the radio link while it is in idle mode.

• Fire Igniters. Remotely fire igniters for recovery system ground tests.

    • Remote reboot. Cause the flight computer to reboot over the radio link. This provides a method for switching the flight computer from idle to flight mode without needing to reach the power switch.

    • Configurable frequency menu. Change the set of available frequencies and provide more descriptive names.

    AltosDroid bug fixes:

    • Don't set target location if GPS hasn't locked yet.

    • Fix saving target states so they can be reloaded when the application restarts. When the application is shut down and restarted, all previous target state information will be restored (including GPS position if available).

    • Fix crash on some Android devices for offline maps when changing the map scale or location.

    • Don't require USB OTG support. This kept the latest AltosDroid from being offered on devices without USB device support, although it can work without that just fine using BlueTooth.

    • Don't require bluetooth to be enabled. This allows the application to operate with USB devices or just show old data without turning on the bluetooth radio.

    • Recover old tracker positions when restarting application. This finally allows you to safely stop and restart the application without losing the last known location of any tracker.

    Documentation

    • Document TeleMega and EasyMega additional pyro channel continuity audio alert pattern.

    by keithp's rocket blog at May 07, 2016 01:18 AM

    May 02, 2016

    ZeptoBARS

    Maxim ds2401z - serial number chip : weekend die-shot

    Dallas Semiconductor/Maxim DS2401 is a factory pre-programmed silicon serial number chip.
    Right at the center of the die you can see 64-bit laser-trimmed ROM. Die size 1346x686 µm.


    May 02, 2016 11:29 PM

    May 01, 2016

    Harald Welte

    Developers wanted for Osmocom GSM related work

    Right now I'm feeling sad. I really shouldn't, but I still do.

    Many years ago I started OpenBSC and Osmocom in order to bring Free Software into an area where it barely existed before: Cellular Infrastructure. For the first few years, it was "just for fun", without any professional users. A FOSS project by enthusiasts. Then we got some commercial / professional users, and with them funding, paying for e.g. Holger and my freelance work. Still, implementing all protocol stacks, interfaces and functional elements of GSM and GPRS from the radio network to the core network is something that large corporations typically spend hundreds of man-years on. So funding for Osmocom GSM implementations was always short, and we always tried to make the best out of it.

    After Holger and I started sysmocom in 2011, we had a chance to use funds from BTS sales to hire more developers, and we were growing our team of developers. We finally could pay some developers other than ourselves from working on Free Software cellular network infrastructure.

In 2014 and 2015, sysmocom got side-tracked with some projects where Osmocom and the cellular network were only one small part of a much larger scope. In Q4/2015 and in 2016, we are back on track, focusing 100% on Osmocom projects, which you can probably see from the many more commits to the respective project repositories.

    By now, we are in the lucky situation that the work we've done in the Osmocom project on providing Free Software implementations of cellular technologies like GSM, GPRS, EDGE and now also UMTS is receiving a lot of attention. This attention translates into companies approaching us (particularly at sysmocom) regarding funding for implementing new features, fixing existing bugs and short-comings, etc. As part of that, we can even work on much needed infrastructural changes in the software.

So now we are in the opposite situation: There's a lot of interest in funding Osmocom work, but there are few people in the Osmocom community interested in and/or capable of following up on that. Some of the early contributors have moved into other areas, and are now working on proprietary cellular stacks at large multi-national corporations. Some others think of GSM as a fun hobby and want to keep it that way.

At sysmocom, we are trying hard to do what we can to keep up with the demand. We've been looking to add people to our staff, but right now we are struggling just to compensate for the regular fluctuation of employees (i.e. keep the team size as is), let alone actually adding new members to our team to help move free software cellular networks ahead.

    I am struggling to understand why that is. I think Free Software in cellular communications is one of the most interesting and challenging frontiers for Free Software to work on. And there are many FOSS developers who love nothing more than to conquer new areas of technology.

    At sysmocom, we can now offer what would have been my personal dream job for many years:

    • paid work on Free Software that is available to the general public, rather than something only of value to the employer
    • interesting technical challenges in an area of technology where you will not find the answer to all your problems on stackoverflow or the like
• work in a small company consisting almost entirely of die-hard engineers, without corporate managers, marketing departments, etc.
    • work in an environment free of Microsoft and Apple software or cloud services; use exclusively Free Software to get your work done

I would hope that more developers would appreciate such an environment. If you're interested in helping to move FOSS cellular networks ahead, feel free to have a look at http://sysmocom.de/jobs or contact us at jobs@sysmocom.de. Together, we can try to move Free Software for mobile communications to the next level!

    by Harald Welte at May 01, 2016 10:00 PM

    April 30, 2016

    Bunnie Studios

    Circuit Classics — Sneak Peek!

    My first book on electronics was Getting Started with Electronics; to this day, I still imagine electrons as oval-shaped particles with happy faces because of its illustrations. So naturally, I was thrilled to find that the book’s author, Forrest Mims III, and my good friend Star Simpson joined forces to sell kit versions of classic circuits straight off the pages of Getting Started with Electronics. This re-interpretation of a classic as an interactive kit is perfect for today’s STEM curriculum, and I hope it will inspire another generation of engineers and hackers.

    I’m very lucky that Star sent me a couple early prototypes to play with. Today was a rainy Saturday afternoon, so I loaded a few tracks from Information Society’s Greatest Hits album (I am most definitely a child of the 80’s) and fired up my soldering iron for a walk down memory lane. I remembered how my dad taught me to bend the leads of resistors with pliers, to get that nice square look. I remembered how I learned to use masking tape and bent leads to hold parts in place, so I could flip the board over for soldering. I remembered doodling circuits on scraps of paper after school while watching Scooby-Doo cartoons on a massive CRT TV that took several minutes to warm up. Things were so much simpler back then …

    I couldn’t help but embellish a little bit. I added a socket for the chip on my Bargraph Voltage Indicator (when I see chips in sockets, I hear a little voice in my head whispering “hack me!” “fix me!” “reuse me!”), and swapped out the red LEDs for some high-efficiency white LEDs I happened to have on the shelf.

    I appreciated Star’s use of elongated pads on the DIP components, a feature not necessary for automated assembly but of great assistance to hand soldering.

    It works! Here I am testing the bargraph voltage indicator with a 3V coin cell on my (very messy) keyboard desk.

    Voilà! My rendition of a circuit classic. I think the photo looks kind of neat in inverse color.

I really appreciate seeing a schematic printed on a circuit board next to its circuit. It reminds me that before Open Hardware, hardware was open. Schematics like these taught me that circuits were knowable; unlike the mysteries of quantum physics and molecular biology, virtually every circuit is a product of human imagination. That another engineer designed it means any other engineer could understand it, given sufficient documentation. As a youth, I didn’t understand what these symbols and squiggles meant; but just knowing that a map existed set me on a path toward greater comprehension.

Whether you’re taking a walk down nostalgia lane or just getting started in electronics, Circuit Classics are a perfect activity for both young and old. If you want to learn more, check out Star Simpson’s crowdfunding campaign on Crowd Supply!

    by bunnie at April 30, 2016 04:19 PM

    Hacking Humble Bundle

    I’m very honored and proud to have one of my books offered as part of the Hacking Humble Bundle. Presented by No Starch Press, the Hacking Humble Bundle is offering several eBook titles for a “pay-what-you-feel” price, including my “Hacking the Xbox”, along with “Automate the Boring Stuff with Python”, “The Linux Command Line” and “The Smart Girl’s Guide to Privacy”. Of course, you can already download Hacking the Xbox for free, but if you opt to pay at least $15 you can get 9 more fantastic titles — check out all of them at the Humble Bundle page.

    One of the best parts about a humble bundle is you have a say in where your money goes.

If you click on “Choose where your money goes” near the checkout area, you’re presented with a set of sliders that let you pick how much money goes to charity, how much to the publisher, and how much as a tip to the Humble Bundle. For the Hacking Humble Bundle, the default charity is the EFF (you’re free to pick others if you want). For the record, I don’t get any proceeds from the Humble Bundle; I’m in it to support the EFF and No Starch.

    If you enjoyed Hacking the Xbox, this is a perfect opportunity to give back to a charitable organization that was instrumental in making it happen. Without the EFF’s counsel, I wouldn’t have known my rights. Knowledge is power, and their support gave me the courage I needed to stand up and assert my right to hack, despite imposing adversaries. To this day, the EFF continues to fight for our rights on the digital frontier, and we need their help more than ever. No Starch has also been a stalwart supporter of hackers; their founder, Bill Pollock, and his “Damn the Torpedoes, Full Speed Ahead” attitude toward publishing potentially controversial topics has enabled hackers to educate the world about relevant but edgy technical topics.

    If hacking interests you, it’s probably worth the time to check out the Hacking Humble Bundle and give a thought about what it’s worth to you. After all, you can “pay what you feel” and still get eBooks in return.

    by bunnie at April 30, 2016 03:49 PM

    April 26, 2016

    Free Electrons

    How we found that the Linux nios2 memset() implementation had a bug!

NiosII is a 32-bit RISC embedded processor architecture designed by Altera for its family of FPGAs: Cyclone III, Cyclone IV, etc. Being a soft-core architecture, by using Altera’s Quartus Prime design software, you can adjust the CPU configuration to your needs and instantiate it into the FPGA. You can customize various parameters like the instruction or the data cache size, enable/disable the MMU, enable/disable an FPU, and so on. And for us embedded Linux engineers, a very interesting aspect is that both the Linux kernel and the U-Boot bootloader, in their official versions, support the NIOS II architecture.

    Recently, one of our customers designed a custom NIOS II platform, and we are working on porting the mainline U-Boot bootloader and the mainline Linux kernel to this platform. The U-Boot porting went fine, and quickly allowed us to load and start a Linux kernel. However, the Linux kernel was crashing very early with:

    [    0.000000] Linux version 4.5.0-00007-g1717be9-dirty (rperier@archy) (gcc version 4.9.2 (Altera 15.1 Build 185) ) #74 PREEMPT Fri Apr 22 17:43:22 CEST 2016
    [    0.000000] bootconsole [early0] enabled
    [    0.000000] early_console initialized at 0xe3080000
    [    0.000000] BUG: failure at mm/bootmem.c:307/__free()!
    [    0.000000] Kernel panic - not syncing: BUG!
    

    This BUG() comes from the __free() function in mm/bootmem.c. The bootmem allocator is a simple page-based allocator used very early in the Linux kernel initialization for the very first allocations, even before the regular buddy page allocator and other allocators such as kmalloc are available. We were slightly surprised to hit a BUG in a generic part of the kernel, and immediately suspected some platform-specific issue, like an invalid load address for our kernel, or invalid link address, or other ideas like this. But we quickly came to the conclusion that everything was looking good on that side, and so we went on to actually understand what this BUG was all about.

    The NIOS II memory initialization code in arch/nios2/kernel/setup.c does the following:

    bootmap_size = init_bootmem_node(NODE_DATA(0),
                                     min_low_pfn, PFN_DOWN(PHYS_OFFSET),
                                     max_low_pfn);
    [...]
    free_bootmem(memory_start, memory_end - memory_start);
    

    The first call init_bootmem_node() initializes the bootmem allocator, which primarily consists in allocating a bitmap, with one bit per page. The entire bootmem bitmap is set to 0xff via a memset() during this initialization:

    static unsigned long __init init_bootmem_core(bootmem_data_t *bdata,
            unsigned long mapstart, unsigned long start, unsigned long end)
    {
            [...]
            mapsize = bootmap_bytes(end - start);
            memset(bdata->node_bootmem_map, 0xff, mapsize);
            [...]
    }
    

    After doing the bootmem initialization, the NIOS II architecture code calls free_bootmem() to mark all the memory pages as available, except the ones that contain the kernel itself. To achieve this, the __free() function (which is the one triggering the BUG) clears the bits corresponding to the page to be marked as free. When clearing those bits, the function checks that the bit was previously set, and if it’s not the case, fires the BUG:

    static void __init __free(bootmem_data_t *bdata,
                            unsigned long sidx, unsigned long eidx)
    {
            [...]
            for (idx = sidx; idx < eidx; idx++)
                    if (!test_and_clear_bit(idx, bdata->node_bootmem_map))
                            BUG();
    }
    

    So to summarize, we were in a situation where a bitmap is memset to 0xff, but almost immediately afterwards, a function that clears some bits finds that some of the bits are already cleared. Sounds odd, doesn’t it?

We started by double checking that the address of the bitmap was the same between the initialization function and the __free function, verifying that the code was not overwriting the bitmap, and other obvious issues. But everything looked alright. So we simply dumped the bitmap after it was initialized by memset to 0xff, and to our great surprise, we found that the bitmap was in fact initialized with the pattern 0xff00ff00 and not 0xffffffff. This obviously explained why we were hitting this BUG(): simply because the buffer was not properly initialized. At first, we really couldn’t believe this: how is it possible that something as essential as memset() in Linux was not doing its job properly?

On the NIOS II platform, memset() has an architecture-specific implementation, available in arch/nios2/lib/memset.c. For buffers smaller than 8 bytes, this memset implementation uses a simple naive loop, iterating byte by byte. For larger buffers, it uses a more optimized implementation written in inline assembly. This implementation writes data in 4-byte blocks rather than one byte at a time, to speed up the memset.
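
In plain C, the overall strategy looks roughly like the sketch below. This is purely illustrative code, not the actual arch/nios2/lib/memset.c implementation (there the word-filling loop is written in inline assembly), and details such as the alignment handling are my own simplification:

#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of the strategy described above: byte-by-byte for
 * small buffers, 4-byte stores of a replicated pattern for larger ones. */
static void *memset_sketch(void *s, int c, size_t count)
{
    uint8_t *dst = s;
    uint8_t byte = (uint8_t)c;

    if (count < 8) {                       /* naive path for small buffers */
        while (count--)
            *dst++ = byte;
        return s;
    }

    uint32_t pattern = byte;               /* replicate 0xff -> 0xffffffff */
    pattern |= pattern << 8;
    pattern |= pattern << 16;

    while (((uintptr_t)dst & 3) != 0) {    /* align to a 4-byte boundary   */
        *dst++ = byte;
        count--;
    }
    while (count >= 4) {                   /* main loop: one 32-bit store  */
        *(uint32_t *)dst = pattern;
        dst += 4;
        count -= 4;
    }
    while (count--)                        /* trailing bytes               */
        *dst++ = byte;
    return s;
}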

    We quickly tested a workaround that consisted in using the naive implementation for all buffer sizes, and it solved the problem: we had a booting kernel, all the way to the point where it mounts a root filesystem! So clearly, it’s the optimized implementation in assembly that had a bug.

After some investigation, we found out that the bug was in the very first instructions of the assembly code. The following piece of assembly is supposed to create a 4-byte value that repeats the 1-byte pattern passed as an argument to memset four times:

    /* fill8 %3, %5 (c & 0xff) */
    "       slli    %4, %5, 8\n"
    "       or      %4, %4, %5\n"
    "       slli    %3, %4, 16\n"
    "       or      %3, %3, %4\n"
    

    This code takes as input in %5 the one-byte pattern, and is supposed to return in %3 the 4-byte pattern. It goes through the following logic:

    • Stores in %4 the initial pattern shifted left by 8 bits. Provided an initial pattern of 0xff, %4 should now contain 0xff00
    • Does a logical or between %4 and %5, which leads to %4 containing 0xffff
    • Stores in %3 the 2-byte pattern shifted left by 16 bits. %3 should now contain 0xffff0000.
• Does a logical or between %3 and %4, i.e. between 0xffff0000 and 0xffff, which gives the expected 4-byte pattern 0xffffffff

When you look at the source code, it looks perfectly fine, so our source code review didn’t spot the problem. However, when we looked at the disassembly of the actual compiled code, we got:

    34:	280a923a 	slli	r5,r5,8
    38:	294ab03a 	or	r5,r5,r5
    3c:	2808943a 	slli	r4,r5,16
    40:	2148b03a 	or	r4,r4,r5
    

    Here r5 gets used for both %4 and %5. Due to this, the final pattern stored in r4 is 0xff00ff00 instead of the expected 0xffffffff.
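
To make the failure mode concrete, here is a small C program that replays both instruction sequences on ordinary variables (the variable names below are just stand-ins for the registers; the authoritative code is the inline assembly and disassembly shown above):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t r5 = 0xff;          /* %5: the 1-byte pattern passed to memset */

    /* Correct sequence: %4 and %5 live in different registers */
    uint32_t r4 = r5 << 8;       /* slli %4, %5, 8   -> 0x0000ff00 */
    r4 |= r5;                    /* or   %4, %4, %5  -> 0x0000ffff */
    uint32_t r3 = r4 << 16;      /* slli %3, %4, 16  -> 0xffff0000 */
    r3 |= r4;                    /* or   %3, %3, %4  -> 0xffffffff */

    /* Buggy sequence: gcc assigned the same register (r5) to %4 and %5 */
    uint32_t x = 0xff;
    x <<= 8;                     /* slli r5, r5, 8   -> 0x0000ff00 */
    x |= x;                      /* or   r5, r5, r5  -> no-op      */
    uint32_t y = x << 16;        /* slli r4, r5, 16  -> 0xff000000 */
    y |= x;                      /* or   r4, r4, r5  -> 0xff00ff00 */

    printf("correct: 0x%08x  buggy: 0x%08x\n", (unsigned)r3, (unsigned)y);
    return 0;
}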

Now, if we take a look at the output operands, %4 is defined with the "=r" constraint, i.e. as an output operand. So how do we prevent the compiler from re-using the corresponding register for another operand? As explained in this document, "=r" alone does not prevent gcc from using the same register for an output operand (%4) and an input operand (%5). By adding the constraint modifier & (in addition to "=r"), we tell the compiler that the register associated with the given operand is an output-only register, and so cannot also be used for an input operand.

    With this change, we get the following assembly output:

    34:	2810923a 	slli	r8,r5,8
    38:	4150b03a 	or	r8,r8,r5
    3c:	400e943a 	slli	r7,r8,16
    40:	3a0eb03a 	or	r7,r7,r8
    

    Which is much better, and correctly produces the 0xffffffff pattern when 0xff is provided as the initial 1-byte pattern to memset.

    In the end, the final patch only adds one character to adjust the inline assembly constraint and gets the proper behavior from gcc:

    diff --git a/arch/nios2/lib/memset.c b/arch/nios2/lib/memset.c
    index c2cfcb1..2fcefe7 100644
    --- a/arch/nios2/lib/memset.c
    +++ b/arch/nios2/lib/memset.c
    @@ -68,7 +68,7 @@ void *memset(void *s, int c, size_t count)
     		  "=r" (charcnt),	/* %1  Output */
     		  "=r" (dwordcnt),	/* %2  Output */
     		  "=r" (fill8reg),	/* %3  Output */
    -		  "=r" (wrkrega)	/* %4  Output */
    +		  "=&r" (wrkrega)	/* %4  Output only */
     		: "r" (c),		/* %5  Input */
     		  "0" (s),		/* %0  Input/Output */
     		  "1" (count)		/* %1  Input/Output */
    

    This patch was sent upstream to the NIOS II kernel maintainers:
    [PATCH v2] nios2: memset: use the right constraint modifier for the %4 output operand, and has already been applied by the NIOS II maintainer.

    We were quite surprised to find a bug in some common code for the NIOS II architecture: we were assuming it would have already been tested on enough platforms and with enough compilers/situations to not have such issues. But all in all, it was a fun debugging experience!

    It is worth mentioning that in addition to this bug, we found another bug affecting NIOS II platforms, in the asm-generic implementation of the futex_atomic_cmpxchg_inatomic() function, which was causing some preemption imbalance warnings during the futex subsystem initialization. We also sent a patch for this problem, which has also been applied already.

    by Romain Perier at April 26, 2016 03:03 PM

    April 22, 2016

    Elphel

    Tutorial 01: Access to Elphel camera documentation from 3D model

We have created a short video tutorial to help our users navigate through 3D models of Elphel cameras. Cameras can be virtually taken apart and put back together, which helps users understand the camera configuration and access information about every camera component. Please feel free to comment on the video quality and usefulness, as we are launching a series of tutorials about cameras, software modifications, FPGA development on the 10393 camera board, etc., and we would like to receive feedback on them.



    Description:

In this video we will show how the 3D model of the Elphel NC393 camera can be used to view the camera, understand the components it is made of, take it apart and put it back together, and get access to each part’s documentation.

The camera model is made using X3Dom technology and is auto-generated from the STEP files used for production.

    In your browser you can open the link to one of the camera assemblies from Elphel wiki page:

    The buttons on the right list all camera components.

    You can click on one of the buttons and the component will be selected on the model. Click again and the part will be selected without the rest of the model.
    From here, using the buttons at the bottom of the screen you can open the part in a new window.
    Or look for the part on Elphel wiki;
    Or hide the part and see the rest of the model;
    Eventually you can return to the whole model by clicking on the part button once more, or there is always a reset model button, at the top left corner.

You can also select a part by clicking on it in the model.

    To deselect it click again;

    Right click removes the part, so you can get access to the insides of the camera.

    Once you have selected the part you can look for more information about it on Elphel wiki.

    For the selected board you can type the board name in the wiki search and get access to the description about the board, circuit diagram, parts list and PCB layout.

All Elphel software is Free Software, distributed under the GNU GPL, and Elphel camera designs are open hardware, distributed under the CERN Open Hardware License.

    by olga at April 22, 2016 02:08 AM

    April 21, 2016

    Free Electrons

    Article on the CHIP in French Linux magazine

    Free Electrons engineer and Allwinner platform maintainer Maxime Ripard has written a long article presenting the Nextthing C.H.I.P platform in issue #18 of French magazine OpenSilicium, dedicated to open source in embedded systems. The C.H.I.P has even been used for the front cover of the magazine!

    OpenSilicium #18

In this article, Maxime presents the C.H.I.P platform, its history and the choice of the Allwinner SoC. He then details how to set up a developer-friendly environment to use the board, building and flashing U-Boot, the kernel and a Debian-based root filesystem from scratch. Finally, he describes how to use Device Tree overlays to describe additional peripherals connected to the board, with the traditional example of the LED.

    OpenSilicium #18 CHIP article

    In the same issue, OpenSilicium also covers numerous other topics:

    • A feedback on the FOSDEM 2016 conference
    • Uploading code to STM32 microcontrollers: the case of STM32-F401RE
    • Kernel and userspace debugging with ftrace
    • IoT prototyping with Buildroot
    • RIOT, the free operating system for the IoT world
• Interview of Cedric Bail, working on the Enlightenment Foundation Libraries for Samsung
    • Setup of Xenomai on the Zynq Zedboard
    • Decompression of 3R data stream using a VHDL-described circuit
• Write a userspace device driver for an FPGA using UIO

    by Thomas Petazzoni at April 21, 2016 08:56 PM

    April 20, 2016

    Free Electrons

    Slides from the Embedded Linux Conference

Two weeks ago, the entire Free Electrons engineering team (9 persons) attended the Embedded Linux Conference in San Diego. We had a really good time there, with lots of interesting talks and useful meetings and discussions.

Tim Bird opening the conference
Discussion between Linus Torvalds and Dirk Hohndel

    In addition to attending the event, we also participated by giving 5 different talks on various topics, for which we are publishing the slides:

    Boris Brezillon, the new NAND Linux subsystem maintainer, presented on Modernizing the NAND framework: The big picture.

    Boris Brezillon's talk on the NAND subsystem

    Antoine Ténart presented on Using DT overlays to support the C.H.I.P’s capes.

    Antoine Tenart's talk on using DT overlays for the CHIP

    Maxime Ripard, maintainer of the Allwinner platform support in Linux, presented on Bringing display and 3D to the C.H.I.P computer.

    Maxime Ripard's talk on display and 3D for the CHIP

    Alexandre Belloni and Thomas Petazzoni presented Buildroot vs. OpenEmbedded/Yocto Project: a four hands discussion.

    Belloni and Petazzoni's talk on OpenEmbedded vs. Buildroot

    Thomas Petazzoni presented GNU Autotools: a tutorial.

    Petazzoni's tutorial on the autotools

All the other slides from the conference are available from the event page as well as from the eLinux.org wiki. All the talks were recorded, and the videos will hopefully be posted soon by the Linux Foundation.

    by Thomas Petazzoni at April 20, 2016 09:17 AM

    April 19, 2016

    Free Electrons

    Free Electrons engineer Boris Brezillon becomes Linux NAND subsystem maintainer

Free Electrons engineer Boris Brezillon has been involved in the support for NAND flashes in the Linux kernel for quite some time. He is the author of the NAND driver for the Allwinner ARM processors, made several improvements to the NAND GPMI controller driver, has initiated a significant rework of the NAND subsystem, and is working on supporting MLC NANDs. Boris is also very active on the linux-mtd mailing list, reviewing patches from others and making suggestions.

    Hynix NAND flash

For those reasons, Boris was recently appointed by the MTD maintainer Brian Norris as a new maintainer of the NAND subsystem. NAND is considered a sub-subsystem of the MTD subsystem, and as such, Boris will be sending pull requests to Brian, who in turn sends pull requests to Linus Torvalds. See this commit for the addition of Boris as a NAND maintainer in the MAINTAINERS file. Boris will therefore be in charge of reviewing and merging all the patches touching drivers/mtd/nand/, which consist mainly of NAND drivers. Boris has created a nand/next branch on GitHub, where he has already merged a number of patches that will be pushed to Brian Norris during the 4.7 merge window.

    We are happy to see one of our engineers taking another position as a maintainer in the kernel community. Maxime Ripard was already a co-maintainer of the Allwinner ARM platform support, Alexandre Belloni a co-maintainer of the RTC subsystem and Atmel ARM platform support, Grégory Clement a co-maintainer of the Marvell EBU platform support, and Antoine Ténart a co-maintainer of the Annapurna Labs platform support.

    by Thomas Petazzoni at April 19, 2016 07:59 AM

    April 16, 2016

    ZeptoBARS

    NXP/Philips BC857BS - dual pnp BJT : weekend die-shot

    SOT-363 package contains 2 separate identical transistor dies.
    Size of each die is 285x259 µm.


    April 16, 2016 02:35 AM

    April 12, 2016

    Free Electrons

    Slides from Collaboration Summit talk on Linux kernel upstreaming

    As we announced in a previous blog post, Free Electrons CTO Thomas Petazzoni gave a talk at the Collaboration Summit 2016 covering the topic of “Upstreaming hardware support in the Linux kernel: why and how?“.

    The slides of the talk are now available in PDF format.

    Upstreaming hardware support in the Linux kernel: why and how?

    Upstreaming hardware support in the Linux kernel: why and how?

    Upstreaming hardware support in the Linux kernel: why and how?

    Through this talk, we identified a number of major reasons that should encourage hardware vendors to contribute the support for their hardware to the upstream Linux kernel, and some hints on how to achieve that. Of course, within a 25 minutes time slot, it was not possible to get into the details, but hopefully the general hints we have shared, based on our significant Linux kernel upstreaming experience, have been useful for the audience.

    Unfortunately, none of the talks at the Collaboration Summit were recorded, so no video will be available for this talk.

    by Thomas Petazzoni at April 12, 2016 11:35 AM

    April 09, 2016

    Bunnie Studios

    Name that Ware, April 2016

    The Ware for April 2016 is shown below.

The ware this month is courtesy of Philipp Gühring. I think it should be a bit more challenging than the past couple of months’ wares. If readers are struggling to guess this one by the end of this month, I’ve got a couple other photos Philipp sent which should give additional clues.

But I’m interested to see what people think this is, with just this photo!

    by bunnie at April 09, 2016 11:22 AM

    April 03, 2016

    ZeptoBARS

    ST TS971 : weekend die-shot

ST TS971 is a single 12 MHz R2R opamp in a SOT23-5 package with low noise and low distortion.
    Die size 1079x799 µm.


    April 03, 2016 08:40 AM

    March 30, 2016

    Elphel

    Synchronizing Verilog, Python and C

The Elphel NC393, like all the previous camera models, relies on the intimate cooperation of the FPGA, programmed in Verilog HDL, and the software that runs on a general-purpose CPU. Just as the FPGA manufacturers increase the speed and density of their devices, so do the Elphel cameras. The FPGA code consists of hundreds of files and tens of thousands of lines of code, and is constantly modified during the lifetime of the product, both by us and by our users, to adapt the cameras to their applications. In most cases, if it is not just a bug fix or a minor improvement of previously implemented functionality, the software (and multiple layers of it) needs to be aware of the changes. This is both the power and the challenge of such hybrid systems, and synchronizing the changes is an important issue.

    Verilog parameters

The Verilog code of the camera consists of parameterized modules; we try to use parameters and generate Verilog operators in most cases, but `define macros and `ifdef conditional directives are still used to switch some global options (like synthesis vs. compilation, various debug levels). The Eclipse-based VDT that we use for FPGA development is aware of the parameters, and when the code instantiates a parameterized module that has parameter-dependent port widths, VDT verifies that the instance ports match the signals connected to them, and warns the developer if that is not the case. Many parameters are routed through the levels of the hierarchy so that the deeper instances can be controlled from a single header file, making it obvious which parameters influence which modules' operation. Some parameters are specified directly, while some have to be calculated – this is the case for the register address decoders of the same module instances for different channels. Such channels have the same relative address maps, but different base addresses. Most of the camera parameters (not counting the trivial ones where the module instance parameters are defined by the nature of the code) are contained in a single x393_parameters.vh header file. There are more than six hundred of them there, and most influence the software API.

    Development cycle

When implementing new camera FPGA functionality, we start with simulation – always. Sometimes very small changes can be applied to the code, synthesized and tested in the actual hardware, but bypassing the simulation step almost never works. So far all the simulation we use consists of plain old Verilog test benches (such as this or that) – not even SystemVerilog. Most likely, for simulating CPU+FPGA devices, the ideal would be to use a software programming language to model the CPU side of the SoC and keep Verilog (or VHDL, for those who prefer it) for the FPGA. Something like cocotb may work, especially since we are already manually translating Verilog into Python, but we are not there yet.

Translating Verilog to Python

So the next step is, as I just mentioned, manual translation of the Verilog tasks and functions used in simulation into Python code that can run on the actual hardware. The result does not look extremely Pythonic, as I try to follow the already tested Verilog code, but it is OK. Not all the translation is manual – we use the import_verilog_parameters.py module to “understand” the parameters defined in the Verilog files (including the simple arithmetic and logical operations used to generate derivative parameters/localparams in the Verilog code) and get the values from the same source, reducing the possibility of accidentally using old software with a modified FPGA implementation. As the parameters are known to the program only at run time, PyDev (running, btw, in the same Eclipse IDE as the VDT – just as a different “perspective”) cannot catch misspelled parameter names. So the program has an option to modify itself and generate pre-defines for each of the parameters. Only the top part of the vrlg module is human-generated; everything under line 120 is automatically generated (and has to be re-generated only after adding new parameters to the Verilog source).

    Hardware testing with Python programs

When the Verilog code is manually translated (or while new parts of the code are being translated or developed from scratch), it is possible to operate the actual camera. The top module is still called test_mcntrl, as it started with DDR3 memory calibration using the Levenberg-Marquardt algorithm (luckily it needs to run just once – it takes the camera 10 minutes to do the full calibration this way).

This program keeps track of the Verilog parameters and macros, exposes all the functions whose names do not begin with an underscore character, extracts docstrings from the code and combines them with the generated list of function parameters and their default values, and provides regexp-based search/help for the functions (a must when there are hundreds of such functions). The following was run on the camera:

    x393 +0.043s--> help w.*_sensor_r
    === write_sensor_reg16 ===
    defined in x393_sensor.X393Sensor, /usr/local/bin/x393_sensor.py: 496)
    Write i2c register in immediate mode
    @param num_sensor - sensor port number (0..3), or "all" - same to all sensors
    @param reg_addr16 - 16-bit register address (page+low byte, for MT9P006 high byte is an 8-bit slave address = 0x90)
    @param reg_data16 - 16-bit data to write to sensor register
         Usage: write_sensor_reg16 <num_sensor> <reg_addr16> <reg_data16>
    x393 +0.010s-->

And the same one in the PyDev console window of the Eclipse IDE – “simulated” means that the program could not detect the FPGA, so it is not running on the target hardware:

    x393(simulated) +0.121s--> help w.*_sensor_r
    === write_sensor_reg16 ===
    defined in x393_sensor.X393Sensor, /home/andrey/git/x393/py393/x393_sensor.py: 496)
    Write i2c register in immediate mode
    @param num_sensor - sensor port number (0..3), or "all" - same to all sensors
    @param reg_addr16 - 16-bit register address (page+low byte, for MT9P006 high byte is an 8-bit slave address = 0x90)
    @param reg_data16 - 16-bit data to write to sensor register
         Usage: write_sensor_reg16 <num_sensor> <reg_addr16> <reg_data16>
    x393(simulated) +0.001s-->

The Python program was also used for the initial development of the AHCI SATA controller (before it was added as a Linux kernel platform driver), but the number of parameters there is much smaller, and most of the addresses are defined by the AHCI standard.

    Synchronizing parameters with the kernel drivers

The next step is to update/redesign/develop the Linux kernel drivers to support the camera functionality. Learning the lessons from the previous camera models (where the software grew incrementally with the hardware), we are trying to minimize manual intervention in the process of synchronizing the different layers of code (including the “hardware” one). The previous cameras' interface to the FPGA consisted of hand-crafted files such as x353.h. It started from x313.h (for the NC313 – our first camera, based on an Axis CPU and a Xilinx FPGA; the same was used in the NC323 that scanned many billions of book pages), was modified for the NC333, and later for our previous NC353 used in car-mounted panoramic cameras that captured most of the world’s roads.

Each time the files were modified to accommodate the new hardware, it was always a challenge to add extra bits to the memory controller addresses or to the image frame widths and heights (they are now all 16 bits wide – enough for multi-gigapixel sensors). With the Python modules already knowing all the current values of the Verilog parameters that define the software interface, it was natural to generate the C files needed to interface with the hardware in the same environment.

    Implementation of the register access in the FPGA

The memory-mapped registers in the camera share the same access mechanism – they use the MAXIGP0 (CPU master, general purpose, channel 0) AXI port available in the SoC, generously mapped to 1/4 of the whole 32-bit address range (0x40000000..0x7fffffff). While logically all the locations are 32-bit wide, some use just 1 byte or even no data at all – any write to such an address causes a defined action.

Internally the commands are distributed to the target modules over a tree of byte-parallel buses that tolerate register insertion; at the endpoints they are converted to parallel format by cmd_deser.v instances. The status data from the modules (sent by status_generate.v) is routed as messages (also in byte-parallel format, to reduce the required FPGA routing resources) to a single block memory that can be read over AXI by the CPU with zero delay. The status generation by the subsystems is individually programmed to be either on demand (in response to a write operation by the CPU) or automatic, whenever the register data changes. While this write and read mechanism is common, the nature of the registers and data may be very different, as the project combines many modules designed at different times for different purposes. All the memory-mapped locations in the design fall into 3 categories:

• Read-only registers that allow reading status from the various modules, DMA pointers and other small data items.
• Read/write registers – the ones where the result of writing does not depend on any context. The full write-register address range has a shadow memory block in parallel, so reading from such an address will return the data that was last written there.
• Write-only registers – all other registers, where the write action depends on the context. Some modules include large tables exposed through a pair of address/data locations in the address map; many others have independent bit fields with a corresponding “set” bit, so internal values are modified only for the selected field.

    Register access as C11 anonymous members

All the registers in the design are 32 bits wide and aligned to 4-byte boundaries, even though not all of them use all the bits. Another common feature of the register model used is that some modules exist in multiple instances, each having evenly spaced base addresses; some have a 2-level hierarchy (channel and sub-channel), where the address is a sum of the category base address, the relative register address and a linear combination of the two indices.

An individual C typedef is generated for each set of registers that have different meanings of the bit fields – this way it is possible to benefit from the compiler's type checking. All the types used fit into 32 bits, and as in many cases the same hardware register can accept alternative values for individual bit fields, we use unions of anonymous (to make access expressions shorter) bit-field structures.

    Here is a generated example of such typedef code (full source):

    // I2C contol/table data
    
    typedef union {
        struct {
              u32        tbl_addr: 8; // [ 7: 0] (0) Address/length in 64-bit words (<<3 to get byte address)
              u32                :20;
              u32        tbl_mode: 2; // [29:28] (3) Should be 3 to select table address write mode
              u32                : 2;
        }; 
        struct {
              u32             rah: 8; // [ 7: 0] (0) High byte of the i2c register address
              u32             rnw: 1; // [    8] (0) Read/not write i2c register, should be 0 here
              u32              sa: 7; // [15: 9] (0) Slave address in write mode
              u32            nbwr: 4; // [19:16] (0) Number of bytes to write (1..10)
              u32             dly: 8; // [27:20] (0) Bit delay - number of mclk periods in 1/4 of the SCL period
              u32    /*tbl_mode*/: 2; // [29:28] (2) Should be 2 to select table data write mode
              u32                : 2;
        }; 
        struct {
              u32         /*rah*/: 8; // [ 7: 0] (0) High byte of the i2c register address
              u32         /*rnw*/: 1; // [    8] (0) Read/not write i2c register, should be 1 here
              u32                : 7;
              u32            nbrd: 3; // [18:16] (0) Number of bytes to read (1..18, 0 means '8')
              u32           nabrd: 1; // [   19] (0) Number of address bytes for read (0 - one byte, 1 - two bytes)
              u32         /*dly*/: 8; // [27:20] (0) Bit delay - number of mclk periods in 1/4 of the SCL period
              u32    /*tbl_mode*/: 2; // [29:28] (2) Should be 2 to select table data write mode
              u32                : 2;
        }; 
        struct {
              u32  sda_drive_high: 1; // [    0] (0) Actively drive SDA high during second half of SCL==1 (valid with drive_ctl)
              u32     sda_release: 1; // [    1] (0) Release SDA early if next bit ==1 (valid with drive_ctl)
              u32       drive_ctl: 1; // [    2] (0) 0 - nop, 1 - set sda_release and sda_drive_high
              u32    next_fifo_rd: 1; // [    3] (0) Advance I2C read FIFO pointer
              u32                : 8;
              u32         cmd_run: 2; // [13:12] (0) Sequencer run/stop control: 0,1 - nop, 2 - stop, 3 - run 
              u32           reset: 1; // [   14] (0) Sequencer reset all FIFO (takes 16 clock pulses), also - stops i2c until run command
              u32                :13;
              u32    /*tbl_mode*/: 2; // [29:28] (0) Should be 0 to select controls
              u32                : 2;
        }; 
        struct {
              u32             d32:32; // [31: 0] (0) cast to u32
        }; 
    } x393_i2c_ctltbl_t;

Some member names in the example above are commented out (like /*tbl_mode*/ in lines 398, 408 and 420). This is done because some bit fields (in this case bits [29:28]) have the same meaning in all the alternative structures, and auto-generating complex union/structure combinations just to create valid C code with each member having a unique name would produce rather clumsy code. Instead, this script makes sure that same-named members really designate the same bit fields, and then makes them anonymous while preserving the names for a human reader. The last member (u32 d32:32;) is added to each union, making it possible to address the whole register as an unsigned 32-bit value without casting.
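
As a small usage sketch, here is how a driver could pack a table entry for an i2c write operation with this type and obtain the raw 32-bit word to be written to the hardware. It assumes the x393_i2c_ctltbl_t typedef shown above; the helper function itself is made up for illustration and is not part of the generated API:

#include <stdint.h>

typedef uint32_t u32;            /* the generated headers use the kernel u32 type */

/* Hypothetical helper: build the table-data word for an i2c write entry,
 * using the anonymous bit fields of x393_i2c_ctltbl_t shown above. */
static u32 make_i2c_write_entry(u32 slave_addr, u32 reg_addr_high,
                                u32 bytes_to_write, u32 bit_delay)
{
    x393_i2c_ctltbl_t e;

    e.d32      = 0;               /* start from a known state                */
    e.rah      = reg_addr_high;   /* high byte of the i2c register address   */
    e.rnw      = 0;               /* write (not read) operation              */
    e.sa       = slave_addr;      /* 7-bit slave address                     */
    e.nbwr     = bytes_to_write;  /* number of bytes to write (1..10)        */
    e.dly      = bit_delay;       /* mclk periods in 1/4 of the SCL period   */
    e.tbl_mode = 2;               /* 2 selects table data write mode         */

    return e.d32;                 /* raw word, ready to be written to the
                                     corresponding table-data register       */
}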

And this is a snippet of the generator code that produced the typedef above:

    def _enc_i2c_tbl_wmode(self):
        dw=[]
        dw.append(("rah",      vrlg.SENSI2C_TBL_RAH,    vrlg.SENSI2C_TBL_RAH_BITS, 0, "High byte of the i2c register address"))
        dw.append(("rnw",      vrlg.SENSI2C_TBL_RNWREG,                         1, 0, "Read/not write i2c register, should be 0 here"))
        dw.append(("sa",       vrlg.SENSI2C_TBL_SA,     vrlg.SENSI2C_TBL_SA_BITS,  0, "Slave address in write mode"))
        dw.append(("nbwr",     vrlg.SENSI2C_TBL_NBWR,   vrlg.SENSI2C_TBL_NBWR_BITS,0, "Number of bytes to write (1..10)"))
        dw.append(("dly",      vrlg.SENSI2C_TBL_DLY,    vrlg.SENSI2C_TBL_DLY_BITS, 0, "Bit delay - number of mclk periods in 1/4 of the SCL period"))
        dw.append(("tbl_mode", vrlg.SENSI2C_CMD_TAND,                           2, 2, "Should be 2 to select table data write mode"))
        return dw

    The vrlg.* values used above are in turn read from the x393_parameters.vh Verilog file:

    //i2c page table bit fields
        parameter SENSI2C_TBL_RAH =        0, // high byte of the register address
        parameter SENSI2C_TBL_RAH_BITS =   8,
        parameter SENSI2C_TBL_RNWREG =     8, // read register (when 0 - write register
        parameter SENSI2C_TBL_SA =         9, // Slave address in write mode
        parameter SENSI2C_TBL_SA_BITS =    7,
        parameter SENSI2C_TBL_NBWR =      16, // number of bytes to write (1..10)
        parameter SENSI2C_TBL_NBWR_BITS =  4,
        parameter SENSI2C_TBL_NBRD =      16, // number of bytes to read (1 - 8) "0" means "8"
        parameter SENSI2C_TBL_NBRD_BITS =  3,
        parameter SENSI2C_TBL_NABRD =     19, // number of address bytes for read (0 - 1 byte, 1 - 2 bytes)
        parameter SENSI2C_TBL_DLY =       20, // bit delay (number of mclk periods in 1/4 of SCL period)
        parameter SENSI2C_TBL_DLY_BITS=    8,

The auto-generated files also include x393.h; it provides other constant definitions (like valid values for the bit fields – lines 301..303) and the function declarations to access the registers. The names of the functions for read-only and write-only registers are derived from the symbolic address names by converting them to lower case; the ones that deal with read/write registers get set_ and get_ prefixes attached.

    #define X393_CMPRS_CBIT_CMODE_JPEG18           0x00000000 // Color 4:2:0
    #define X393_CMPRS_CBIT_FRAMES_SINGLE          0x00000000 // Use single-frame buffer
    #define X393_CMPRS_CBIT_FRAMES_MULTI           0x00000001 // Use multi-frame buffer
    
    // Compressor control
    
    void               x393_cmprs_control_reg (x393_cmprs_mode_t d, int cmprs_chn);  // Program compressor channel operation mode
    void               set_x393_cmprs_status  (x393_status_ctrl_t d, int cmprs_chn); // Setup compressor status report mode
    x393_status_ctrl_t get_x393_cmprs_status  (int cmprs_chn);

The register access functions are implemented with readl() and writel(); this is the corresponding section of the x393.c file:

    // Compressor control
    
    void               x393_cmprs_control_reg (x393_cmprs_mode_t d, int cmprs_chn)  {writel(d.d32, mmio_ptr + (0x1800 + 0x40 * cmprs_chn));} // Program compressor channel operation mode
    void               set_x393_cmprs_status  (x393_status_ctrl_t d, int cmprs_chn) {writel(d.d32, mmio_ptr + (0x1804 + 0x40 * cmprs_chn));} // Setup compressor status report mode
    x393_status_ctrl_t get_x393_cmprs_status  (int cmprs_chn)                       { x393_status_ctrl_t d; d.d32 = readl(mmio_ptr + (0x1804 + 0x40 * cmprs_chn)); return d; }
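
A minimal usage sketch of these generated accessors follows (a hypothetical driver fragment; it relies on the shadow-memory behavior of read/write registers described earlier, so reading the compressor status register back returns the last value written):

/* Hypothetical fragment using the accessors declared in the generated x393.h.
 * Only the d32 member of x393_status_ctrl_t is used here, since its individual
 * bit fields are not shown above. */
static int cmprs_status_ctrl_matches(int chn, u32 status_ctrl_word)
{
    x393_status_ctrl_t d, readback;

    d.d32 = status_ctrl_word;
    set_x393_cmprs_status(d, chn);        /* program the status report mode */

    /* Read/write registers are backed by a shadow memory block, so reading
     * the same address returns the data that was last written there. */
    readback = get_x393_cmprs_status(chn);
    return readback.d32 == d.d32;
}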

There are two other header files generated from the same data. One (x393_defs.h) is just an alternative way to represent the register addresses – instead of the getter and setter functions it defines preprocessor macros:

    // Compressor control
    
    #define X393_CMPRS_CONTROL_REG(cmprs_chn) (0x40001800 + 0x40 * (cmprs_chn)) // Program compressor channel operation mode, cmprs_chn = 0..3, data type: x393_cmprs_mode_t (wo)
    #define X393_CMPRS_STATUS(cmprs_chn)      (0x40001804 + 0x40 * (cmprs_chn)) // Setup compressor status report mode, cmprs_chn = 0..3, data type: x393_status_ctrl_t (rw)

The last generated file, x393_map.h, uses the preprocessor macro format to provide a full, ordered address map of all the available registers for all channels and sub-channels. It is intended to be used just as a reference for developers, not as an actual include file.

    Conclusions

The generated code for the Elphel NC393 camera is definitely very hardware-specific; its main purpose is to encapsulate as much of the hardware interface details as possible and so reduce the dependence of the higher layers of software on modifications to the HDL code. Such tasks are common to other projects that involve CPU/FPGA tandems, and a similar approach to organizing the software/hardware interface may be useful there too.

    by Andrey Filippov at March 30, 2016 08:04 PM

    March 27, 2016

    Harald Welte

    You can now install a GSM network using apt-get

    This is great news: You can now install a GSM network using apt-get!

    Thanks to the efforts of Debian developer Ruben Undheim, there's now an OpenBSC (with all its flavors like OsmoBSC, OsmoNITB, OsmoSGSN, ...) package in the official Debian repository.

    Here is the link to the e-mail indicating acceptance into Debian: https://tracker.debian.org/news/755641

For the many years that I have been involved in the OpenBSC (and wider Osmocom) projects, I always assumed that distribution packaging is not really all that important, as all the people using OpenBSC would surely be technical enough to build it from source. And in fact, I believe that building from source brings you one step closer to actually modifying the code, and thus to contributing.

    Nevertheless, the project has matured to a point where it is not used only by developers anymore, and particularly also (god beware) by people with limited experience with Linux in general. That such people still exist is surprisingly hard to realize for somebody like myself who has spent more than 20 years in Linux land by now.

    So all in all, today I think that having packages in a Distribution like Debian actually is important for the further adoption of the project - pretty much like I believe that more and better public documentation is.

    Looking forward to seeing the first bug reports reported through bugs.debian.org rather than https://projects.osmocom.org/ . Once that happens, we know that people are actually using the official Debian packages.

As an unrelated side note, the Osmocom project now also has nightly builds available for Debian 7.0, Debian 8.0 and Ubuntu 14.04 on both i586 and x86_64 architectures from https://build.opensuse.org/project/show/network:osmocom:nightly. The nightly builds are for people who want to stay on the bleeding edge of the code, but who don't want to go through building everything from scratch. See Holger's post on the openbsc mailing list for more information.

    by Harald Welte at March 27, 2016 10:00 PM

    March 26, 2016

    ZeptoBARS

    ST TS321 - generic SOT23 opamp : weekend die-shot

    ST TS321 is a single opamp in SOT23-5 package designed to match and exceed industry standard LM358A and LM324 opamps.
    Die size 1270x735 µm.


    March 26, 2016 08:18 AM

    March 24, 2016

    Video Circuits

    Seeing Sound

I will be giving a talk on some research I have been doing into early British video synthesis and electronic video work at this year's Seeing Sound. I will also be screening some work from some contemporary Video Circuits regulars as part of the conference.
    www.seeingsound.co.uk Sign up here!


    by Chris (noreply@blogger.com) at March 24, 2016 08:59 AM

    March 22, 2016

    Bunnie Studios

    Formlabs Form 2 Teardown

I don’t do many teardowns on this blog, as several other websites already do an excellent job of that, but when I was given the chance to take apart a Formlabs Form 2, I was more than happy to oblige. About three years ago, I had posted a teardown of a Form 1, which I received as a Kickstarter backer reward. Today, I’m looking at a Form 2 engineering prototype. Now that the Form 2 is in full production, the prototypes are basically spare parts, so I’m going to unleash my inner child and tear this thing apart with no concern about putting it back together again.

    For regular readers of this blog, this teardown takes the place of March 2016’s Name that Ware — this time, I’m the one playing Name that Ware and y’all get to follow along as I adventure through the printer. Next month I’ll resume regular Name that Ware content.

    First Impressions

    I gave the Form 2 a whirl before tearing it into an irreparable pile of spare parts. In short, I’m impressed; the Form 2 is a major upgrade from the Form 1. It’s an interesting contrast to Makerbot. The guts of the Makerbot Replicator 2 are basically the same architecture as previous models, inheriting all the limitations of its previous incarnation.

    The Form 2 is a quantum leap forward. The product smells of experienced, seasoned engineers; a throwback to the golden days of Massachusetts Route 128 when DEC, Sun, Polaroid and Wang Laboratories cranked out quality American-designed gear. Formlabs wasn’t afraid to completely rethink, re-architect, and re-engineer the system to build a better product, making bold improvements to core technology. As a result, the most significant commonality between the Form 1 and the Form 2 is the iconic industrial design: an orange acrylic box sitting atop an aluminum base with rounded corners and a fancy edge-lit power button.

    Before we slip off the cover, here’s a brief summary of the upgrades that I picked up on while doing the teardown:

  • The CPU is upgraded from a single 72MHz ST Micro STM32F103 Cortex-M3 to a 600 MHz TI Sitara AM3354 Cortex A8, with two co-processors: a STM32F030 as a signal interface processor, and a STM32F373 as a real-time DSP on the galvo driver board.
  • This massive upgrade in CPU power leapfrogs the UI from a single push button plus monochrome OLED on the Form 1, to a full-color 4.3” capacitive touch screen on the Form 2.
  • The upgraded CPU also enables the printer to have built-in wifi & ethernet, in addition to USB. Formlabs thoughtfully combines this new TCP/IP capability with a Bonjour client. Now, computers can automatically discover and enumerate Form 2’s on the local network, making setup a snap.
  • The UI also makes better use of the 4 GB of on-board FLASH by adding the ability to “replay” jobs that were previously uploaded, making the printer more suitable for low volume production.
  • The galvanometers are full custom, soup-to-nuts. We’ll dig into this more later, but presumably this means better accuracy, better print jobs, and a proprietary advantage that makes it much harder for cloners to copy the Form 2.
  • The optics pathway is fully shrouded, eliminating dust buildup problems. A beautiful and much easier to clean AR-coated glass surface protects the internal optics; internal shrouds also limit the opportunity for dust to settle on critical surfaces.
  • The resin tray now features a heater with closed-loop control, for more consistent printing performance in cold New England garages in the dead of winter.
  • The resin tray is now auto-filling from an easy to install cartridge, enabling print jobs that require more resin than could fit in a single tank while making resin top-ups convenient and spill-free.
  • The peel motion is now principally lateral, instead of vertical.
  • The resin tank now features a stirrer. On the Form 1, light scattering would create thickened pools of partially cured resin near the active print region. Presumably the stirrer helps homogenize the resin; I also remember someone once mentioning the importance of oxygen to the surface chemistry of the resin tank.
  • There are novel internal photosensor elements that hint at some sort of calibration/skew correction mechanism.
  • There’s a tilt sensor and manual mechanical leveling mechanism. A level tank prevents the resin from pooling to one side.
  • There are sensors that can detect the presence of the resin tank and the level of the resin. With all these new sensors, the only way a user can bork a print is to forget to install the build platform.
  • Speaking of tank detection, the printer now remembers what color resin was used on a given tank, so you don’t accidentally spoil a clear resin tank with black resin.
  • The power supply is now fully embedded; goodbye PSU failures and weird ground loop issues. It’s a subtle detail, but it’s the sort of “grown-up” thing that younger companies avoid doing because it complicates safety certification and requires compliance to elevated internal wiring and plastic flame retardance standards.
  • I’m also guessing there are a number of upgrades that are less obvious from a visual inspection, such as improvements to the laser itself, or optimizations to the printing algorithm.

    These improvements indicate a significant manpower investment on the part of Formlabs, and an incredible value add to the core product, as many of the items I note above would take several man-months to bring to production-ready status.

    Test Print

    As hinted by the upgrade list, the UI has been massively improved. The touchscreen-based UI features tech-noir themed iconography and animations that would be right at home on a movie set. This refreshing attention to detail sets the Form 2’s UI apart from the utilitarian “designed-by-programmers-for-geeks” UI typical of most digital fabrication tools.


    A UI that would seem at home on a Hollywood set. Life imitating art imitating life.

    Unfortunately, the test print didn’t go smoothly. Apparently the engineering prototype had a small design problem which caused the resin tray’s identification contacts to intermittently short against the metal case during a peel operation. This would cause the bus shared between the ID chips on the resin tank and the filler cartridge to fail. As a result, the printer paused twice on account of a bogus “missing resin cartridge” error. Thankfully, the problem would eventually fix itself, and the print would automatically resume.


    Test print from the Form 2. The red arrow indicates the location of a hairline artifact from the print pausing for a half hour due to issues with resin cartridge presence detection.

    The test print came out quite nicely, despite the long pauses in printing. There’s only a slight, hairline artifact where the printer had stopped, so that’s good – if the printer actually does run out of resin, the printer can in fact pause without a major impact on print quality.

    Significantly, this problem is fixed in my production unit – with this unit, I’ve had no problems with prints pausing due to the resin cartridge ID issue. It looks like they tweaked the design of the sheet metal around the ID contacts, giving it a bit more clearance and effectively solving the problem. It goes to show how much time and resources are required to vet a product as complex as a 3D printer – with so many sensors, moving parts, and different submodules that have to fit together perfectly throughout a service life involving a million cycles of movement, it takes a lot of discipline to chase down every last detail. So far, my production Form 2 is living up to expectations.

    Removing the Outer Shell

    I love that the Form 2, like the Form 1, uses exclusively hex and torx drive fasteners. No crappy Phillips or slotted screws here! They also make extensive use of socket cap style screws, which are a perennial favorite of mine.

    Removing the outer shell and taking a look around, we continue to see evidence of thoughtful engineering. The cable assemblies are all labeled and color-coded; there’s comprehensive detail on chassis grounding; the EMI countermeasures are largely designed-in, as opposed to band-aided at the last minute; and the mechanical engineering got kicked up a notch.

    I appreciated the inclusion of an optical limit switch on the peel drive. The previous generation’s peel mechanism relied on a mechanical clutch with a bit of overdrive, which meant every peel cycle ended with a loud clicking sound. Now, it runs much more quietly, thanks to the feedback of the limit switch.


    Backside of the Form 2 LCD + touchscreen assembly.

    The touchpanel and display are mounted on the outer shell. The display is a DLC0430EZG 480×272 pixel TFT LCD employing a 24-bit RGB interface. I was a bit surprised at the use of a 30-pin ribbon cable to transmit video data between the electronics mainboard and the display assembly, as unshielded ribbon cables are notorious for unintentional RF emissions that complicate the certification process. However, a closer examination of the electronics around the ribbon cable reveals the inclusion of a CMOS-to-LVDS serdes IC on either side of the cable. Although this increases the BOM cost, the use of differential signaling greatly reduces the emissions footprint of the ribbon cable while improving signal integrity over an extended length of wire.

    Significantly, the capacitive touchpanel’s glass seems to be a full custom job, as indicated by the fitted shape with a hole for mounting the power button. The controller IC for the touchpanel is a Tango C44 by PIXCIR, a fabless semiconductor company based out of Suzhou, China. It’s heartening to see that the market for capacitive touchpanels has commoditized to the point where a custom panel makes sense for a relatively low volume product. I remember trying to source captouch solutions back in 2008, just a couple years after the iPhone’s debut popularized capacitive multi-touch sensors. It was hard to get any vendor to return your call if you didn’t have seven figures in your annual volume estimate, and the quoted NRE for custom glass was likewise prohibitive.

    Before leaving the touchpanel and display subsection, I have to note with a slight chuckle the two reference designators (R22 and U4) that are larger than the rest. It’s a purely cosmetic mistake which I recognize because I’ve done it myself several times. From the look of the board, I’m guessing it was designed using Altium. Automatic ECOs in Altium introduce new parts with a goofy huge default designator size, and it’s easy to miss the difference. After all, you spend most of your time editing the PCB with the silkscreen layer turned off.

    The Electronics

    As an electronics geek, my attention was first drawn to the electronics mainboard and the galvanometer driver board. The two are co-mounted on the right hand side of the printer, with a single 2×8 0.1” header spanning the gap between the boards. The mounting seems to be designed for easy swapping of the galvanometer board.

    I have a great appreciation for Formlabs’ choice of using a Variscite SOM (system-on-module). I can speak from first-hand experience, having designed the Novena laptop, that it’s a pain in the ass to integrate a high speed CPU, DDR3 memory, and power management into a single board with complex mixed-signal circuitry. Dropping down a couple of BGAs and routing the DDR3 fly-by topology while managing impedance and length matching is just the beginning of a long series of headaches. You then get to look forward to power sequencing, hardware validation, software drivers, factory testing, yield management and a hundred extra parts in your supply chain. Furthermore, many of the parts involved in the CPU design benefit from economies of scale much larger than can be achieved from this one product alone.

    Thus while it may seem attractive from a BOM standpoint to eliminate the middleman and integrate everything into a single PCB, from a system standpoint the effort may not amortize until the current version of the product has sold a few thousand units. By using a SOM, Formlabs reduces specialized engineering staff, saves months on the product schedule, and gains the option to upgrade their CPU without having to worry about amortization.

    Furthermore, the pitch of the CPU and DDR3 BGAs is optimized for compact designs and assumes a 6 or 8-layer PCB with 3 or 4-mil design rules. If you think about it, only the 2 square inches around the CPU and DRAM require these design rules. If the entire design is just a couple square inches, it’s no big deal to fab the entire board using premium design rules. However, the Form 2’s main electronics board is about 30 square inches. Only 2 square inches of this would require the high-spec design rules, meaning they would effectively be fabricating 28 square inches of stepper motor drivers using an 8-layer PCB with 3-mil design rules. The cost to fabricate such a large area of PCB adds up quickly, and by reducing the technology requirement of the larger PCB they probably make up decent ground on the cost overhead of the SOM.

    Significantly, Formlabs was very selective about what they bought from Variscite: the SOM contained neither Wifi nor FLASH memory, even though the SOM itself had provisions for both. These two modules can be integrated onto the mainboard without driving up technology requirements, so Formlabs opted to self-source these components. In essence, they kept Variscite’s mark-up limited to a bare minimum set of components. The maturity to pick and choose cost battles is a hallmark of an engineering team with experience working in a startup environment. Engineers out of large, successful companies are used to working with virtually limitless development budgets and massive purchasing leverage, and typically show less discretion when allocating effort to cost reduction.


    Mainboard assembly with SOM removed; back side of SOM is photoshopped into the image for reference.

    I also like that Formlabs chose to use eMMC FLASH, instead of an SD card, for data storage. It’s probably a little more expensive, but the supply chain for eMMC is a bit more reliable than commodity SD memory. As eMMC is soldered onto the board, J3 was added to program the memory chip after assembly. It looks like the same wires going to the SOM are routed to J3, so the mainboard is probably programmed before the SOM is inserted.

    Formlabs also integrates the stepper motor drivers into the mainboard, instead of using DIP modules like the Makerbot did until at least the Replicator’s Mighty Board Rev E. I think the argument I heard for the DIP modules was serviceability; however, I have to imagine the DIP modules are problematic for thermal management. PCBs are pretty good heatsinks, particularly those with embedded ground planes. Carving up the PCB into tiny modules appreciably increases the thermal resistance between the stepper motor driver and the air around it, which might actually drive up the failure rate. The layout of the stepper motor drivers on the Formlabs mainboard shows ample provisions for heat to escape the chips into the PCB through multiple vias and large copper fills.


    Mainboard assembly with annotations according to the discussion in this post.

    Overall, the mainboard was thoughtfully designed and laid out; the engineering team (or engineer) was thinking at a system-level. They managed to escape the “second system effect” by restrained prioritization of engineering effort; just because they raised a pile of money didn’t mean they had to go re-engineer all the things. I also like that the entire layout is single-sided, which simplifies assembly, inspection and testing.

    I learned a lot from reading this board. I’ve often said that reading PCBs is better than reading a textbook for learning electronics design, which is part of the reason I do a monthly Name that Ware. For example, I don’t have extensive experience in designing motor controllers, so next time I need to design a stepper motor driver, I’m probably going to have a look at this PCB for ideas and inspiration – a trivial visual inspection will inform me on what parts they used, the power architecture, trace widths, via counts, noise isolation measures and so forth. Even if the hardware isn’t Open, there’s still a lot that can be learned just by looking at the final design.

    Now, I turn my attention to the galvanometer driver board. This is a truly exciting development! The previous generation used a fully analog driver architecture which I believe is based on an off-the-shelf galvanometer driver. A quick look around this PCB reveals that they’ve abandoned closing the loop in the analog domain, and stuck a microcontroller in the signal processing path. The signal processing is done by a STM32F373 – a 72 MHz, Cortex-M4 with FPU, HW division, and DSP extensions. Further enhancing its role as a signal processing element, the MCU integrates a triplet of 16-bit sigma-delta ADCs and 12-bit DACs. The board also has a smattering of neat-looking support components, such as a MCP42010 digital potentiometer, a fairly handsome OPA4376 precision rail-to-rail op amp, and a beefy LM1876 20W audio amplifier, presumably used to drive the galvanometer voice coils.

    The power for the audio amplifier is derived from a pair of switching regulators, a TPS54336A handling the positive rail, and an LTC3704 handling the negative rail. There’s a small ECO wire on the LTC3704 which turns off burst mode operation; probably a good idea, as burst mode would greatly increase the noise on the negative rail, and in this application standby efficiency isn’t a paramount concern. I’m actually a little surprised they’re able to get the performance they need using switching regulators, but with a 20W load that may have been the only practical option. I guess the switching regulator’s frequency is also much higher than the bandwidth of the galvos, so maybe in practice the switching noise is irrelevant. There is evidence of a couple of tiny SOT-23 LDOs scattered around the PCB to clean up the supplies going to sensitive analog front-end circuitry, and there’s also this curious combination of an FQD7N10L NFET plus an MCP6L02 dual op-amp. It looks like they intended the NFET to generate some heat, given the exposed solder slug on the back side, which makes me think this could be a discrete pass-FET LDO of some type. There’s one catch: the MCP6L02 can only operate at up to 6V, and power inside the Form 2 is distributed at 24V. There’s probably something clever going on here that I’m not gathering from a casual inspection of the PCBs; perhaps later I’ll break out some oscope probes to see what’s going on.

    Overall, this ground-up redesign of the galvanometer driver should give Formlabs a strong technological foundation to implement tricks in the digital domain, which sets it apart from clones that still rely upon off-the-shelf fully analog galvanometer driver solutions.

    Before leaving our analysis of the electronics, let’s not forget the main power supply. It’s a Meanwell EPS-65-24-C. The power supply itself isn’t such a big deal, but the choice to include it within the chassis is interesting. Many, if not most, consumer electronic devices prefer to use external power bricks because it greatly simplifies certification. Devices that use voltages below 60V fall into the “easy” category for UL and CE certification. By pulling the power supply into the chassis, they are running line voltages up to 240V inside, which means they have to jump through IEC 60950-1 safety testing. It ups the ante on a number of things, including the internal wiring standards and the flame retardance of any plastics used in the assembly. I’m not sure why they decided to pull the power supply into the chassis; they aren’t using any fancy point-of-load voltage feedback to cancel out IR drops on the cable. My best guess is they felt it would either be a better customer experience to not have to deal with an external power brick, or perhaps they were bitten in the previous generation by flaky power bricks or ground loop/noise issues that sometimes plague devices that use external AC power supplies.

    The Mechanical Platform

    It turns out that my first instinct to rip out the electronics was probably the wrong order for taking apart the Form 2. A closer inspection of the base reveals a set of rounded rectangles that delineate the screws belonging to each physical subsystem within the device. This handy guide makes assembly (and repair) much easier.

    The central set of screws hold down the mechanical platform. Removing those causes the whole motor and optics assembly to pop off cleanly, giving unfettered access to all the electronics.

    I’m oddly excited about the base of the Form 2. It looks like just a humble piece of injection molded plastic. But this is an injection molded piece of plastic designed to withstand the apocalypse. Extensive ribbing makes the base extremely rigid, and resistant to warpage. The base is also molded using glass-filled polymer – the same tough stuff used to make Pelican cases and automotive engine parts. I’ve had the hots for glass-filled polymers recently, and have been itching for an excuse to use it in one of my designs. Glass-filled polymer isn’t for happy-meal toys or shiny gadgets, it’s tough stuff for demanding applications, and it has an innately rugged texture. I’m guessing they went for a bomb-proof base because anything less rigid would lead to problems keeping the resin tank level. Either that, or someone in Formlabs has the same fetish I have for glass filled polymers.

    Once removed from the base, the central mechanical chassis stands upright on its own. Inside this assembly is the Z-axis leadscrew for the build platform, resin level sensor, resin heater, peel motor, resin stirrer, and the optics engine.

    Here’s a close-up of the Z-stepper motor + leadscrew, resin level & temperature sensor, and resin valve actuator. The resin valve actuator is a Vigor Precision BO-7 DC motor with gearbox, used to drive a swinging arm loaded with a spring to provide the returning force. The arm pushes on the integral resin cartridge valve, which looks uncannily like the bite valve from a Camelback.

    The resin tank valve is complemented by the resin tank’s air vent, which also looks uncannily like the top of a shampoo bottle.

    My guess is Formlabs is either buying these items directly from the existing makers of Camelback and shampoo products, in which case the First Sale Doctrine means any patent claims that may exist on these have been exhausted, or they have licensed the respective IP to make their own version of each.

    The resin level and temperature sensor assembly is also worth a closer look. It’s a PCB that’s mounted directly behind the resin tank, and in front of the Z-motor leadscrew.


    Backside of the PCB mounted directly behind the resin tank.

    It looks like resin level is measured using a TI FDC1004 capacitive liquid level sensor. I would have thought that capacitive sensing would be too fussy for accurate liquid level sensing, but after reading the datasheet for the FDC1004 I’m a little less skeptical. However, I imagine the sensor is extremely sensitive to all kinds of contamination, the least of which is resin splattered or dripped onto the sensor PCB.
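    For intuition, the principle can be modeled very simply: because the resin’s dielectric constant is higher than air’s, the sensing electrode’s capacitance grows roughly linearly with liquid height. The sketch below is my own first-order illustration of that idea, with invented calibration numbers; it is not the FDC1004 datasheet algorithm.

    #include <stdio.h>

    /* estimate liquid level from a measured capacitance, given two calibration points */
    static double level_mm(double c_meas_pf, double c_empty_pf, double c_full_pf,
                           double full_height_mm)
    {
        double frac = (c_meas_pf - c_empty_pf) / (c_full_pf - c_empty_pf);
        if (frac < 0.0) frac = 0.0;   /* clamp against drift or contamination */
        if (frac > 1.0) frac = 1.0;
        return frac * full_height_mm;
    }

    int main(void)
    {
        /* all capacitance values here are invented, for illustration only */
        printf("resin level: %.1f mm\n", level_mm(6.3, 5.0, 8.0, 25.0));
        return 0;
    }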


    Detail of the sensor PCB highlighting the non-contact thermopile temperature sensor.

    The resin temperature sense mechanism is also quite interesting. You’ll note a little silvery square, shrouded in plastic, mounted on the PCB behind the resin tank. First of all, the plastic shroud on my unit is clearly a 3D printed piece done by another Formlabs printer. You can see the nubs from the support structure and striation artifacts from the buildup process. I love that they’re dogfooding and using their own products to prototype and test; it’s a bad sign if the engineering team doesn’t believe in their own product enough to use it themselves.

    Unscrewing the 3D printed shroud reveals a curious flip-chip CSP device, which I’m guessing is a TI TMP006 or TMP007 MEMS thermopile. Although there are no part numbers on the chip, a quick read through the datasheet reveals a reference layout that is a dead ringer for the pattern on the PCB around the chip. Thermopiles can do non-contact remote temperature sensing, and it looks like this product has an accuracy of about +/-1°C from 0–60°C. This explains the mystery of how they’re able to report the resin temperature on the UI without any sort of probe dipping into the resin tank.

    But then how do they heat it? Look under the resin tank mount, and we find another PCB.

    When I first saw this board, I thought its only purpose was to hold the leafspring contacts for the ID chip that helps track individual resin tanks and what color resin was used in them. Flip the PCB over, and you’ll see a curious pinkish tape covering the reverse surface.

    The pinkish tape is actually a thermal gap sealer, and peeling the tape back reveals that the PCB itself has a serpentine trace throughout, which means they are using the resistivity of the copper trace on the PCB itself as a heating mechanism for the resin.

    Again, I wouldn’t have guessed this is something that would work as well as it does, but there you have it. It’s a low-cost mechanism for controlling the temperature of the resin during printing. Probably the PCB material is the most expensive component, even more than the thermopile IR sensor, and all that’s needed to drive the heating element is a beefy BUK9277 NFET.
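    For a feel of why a bare copper trace makes a workable heater, here’s a back-of-the-envelope estimate; every number below is my own guess (trace width, length, copper weight), not a measurement of the actual board.

    #include <stdio.h>

    int main(void)
    {
        double sheet_r  = 0.0005;   /* ohms per square, roughly 1 oz copper at room temp */
        double width_m  = 0.00025;  /* 0.25 mm trace width (assumed)                     */
        double length_m = 12.0;     /* total serpentine length (assumed)                 */
        double squares  = length_m / width_m;          /* 48,000 squares                 */
        double r_trace  = sheet_r * squares;           /* about 24 ohms                  */
        double v_rail   = 24.0;                        /* printer's 24 V distribution    */
        double power_w  = v_rail * v_rail / r_trace;   /* P = V^2 / R, about 24 W        */

        printf("trace resistance: %.1f ohm, heater power: %.1f W\n", r_trace, power_w);
        return 0;
    }

    A couple dozen watts spread across the whole underside of the tank, PWM-controlled through that NFET, is a plausible ballpark for gently warming the resin.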

    I’ve been to the Formlabs offices in Boston, and it does get rather chilly and dry there in the winter, so it makes sense they would consider cold temperature as a variable that could cause printing problems on the Form 2.

    Cold weather isn’t a problem here in Singapore; however, persistent 90% humidity is an issue. If I didn’t use my Form 1 for several weeks, the first print would always come out badly; usually I’d have to toss the resin in the tank and pour a fresh batch before a print would come out right. I managed to solve this problem by placing a large pack of desiccant next to the resin tank, as well as using the shipping lid to try to seal out moisture. However, I’m guessing they have very few users in the tropics, so humidity-related print problems are probably going to be a unique edge case I’ll have to solve on my own for some time to come.

    The Optics Pathway

    Finally, the optics – I’m saving the best for last. The optics pathway is the beating heart of the Form 2.


    The last thing uncured resin sees before it turns into plastic.

    The first thing I noticed about the optics is the inclusion of a protective glass panel underneath the resin tank. In the Form 1, if the build platform happened to drip resin while the tank was removed, or if the room was dusty, you had the unenviable task of reaching into the printer to clean the mirror. The glass panel simplifies the cleaning operation while protecting sensitive optics from dust and dirt.

    I love that the protective glass has an AR coating. You can tell there’s an AR coating from the greenish tint of the reflections off the surface of the glass. AR coatings are sexy; if I had a singles profile, you’d see “the green glint of AR-coated glasses” under turn-ons. Of course, the coating is there for functional reasons – any loss of effective laser power due to reflections off of the protective glass would reduce printing efficiency.

    The contamination-control measures don’t just stop at a protective glass cover. Formlabs also provisioned a plastic shroud around the entire optics assembly.


    Bottom view of the mechanical platform showing the protective shrouds hiding the optics.

    Immediately underneath the protective glass sheet is a U-shaped PCB which I can only assume is used for some kind of calibration. The PCB features five photodetectors: one mounted in “plain sight” of the laser, and four mounted in the far corners on the reverse side of the PCB, with the detectors facing into the PCB, such that the PCB is obscuring the photodetectors. A single, small pinhole located in the center of each detector allows light to fall onto the obscured photodetectors. However, the size of the pinhole and the dimensional tolerance of the PCB are probably too large for this to be an absolute calibration for the printer. My guess is this is probably used as more of a coarse diagnostic to confirm laser power and range of motion of the galvanometers.

    Popping off the shroud reveals the galvanometer and laser assembly. The galvanometers sport a prominent Formlabs logo. They are a Formlabs original design, and not simply a relabeling of an off the shelf solution. This is a really smart move, especially in the face of increasing pressure from copycats. Focusing resources into building a proprietary galvo is a trifecta for Formlabs: they get distinguished print quality, reduced cost, and a barrier to competition all in one package. Contrast this to Formlabs’ decision to use a SOM for the CPU; if Formlabs can build their own galvo & driver board, they certainly had the technical capability to integrate a CPU into the mainboard. But in terms of priorities, improving the galvo is a much better payout.

    Readers unfamiliar with galvanometers may want to review a Name that Ware I did of a typical galvanometer a while back. In a nutshell, a typical galvanometer consists of a pair of voice coils rotating a permanent magnet affixed to a shaft. The shaft’s angle is measured by an optical feedback system, where a single light source shines onto a paddle affixed to the galvo’s shaft. The paddle alternately occludes light hitting a pair of photodetectors positioned behind the paddle relative to the light source.

    Now, here’s the entire Form 2 galvo assembly laid out in pieces.


    Close-up view of the photoemitter and detector arrangement.

    Significantly, the Form 2 galvo has not two, but four photodetectors, surrounding a single central light source. Instead of a paddle, a notch is cut into the shaft; the notch modulates the light intensity reaching the photodiodes surrounding the central light source according to the angle of the shaft.


    The notched shaft above sits directly above the photoemitter when the PCB is mated to the galvo body.

    This is quite different from the simple galvanometer I had taken apart previously. I don’t know enough about galvos to recognize if this is a novel technique, or what exactly is the improvement they hoped to get by using four photodiodes instead of two. With two photodiodes, you get to subtract out the common mode of the emitter and you’re left with the error signal representing the angle of the shaft: two variables solving for two unknowns. With four photodiodes, they can solve for a couple more unknowns – but what are they? Maybe they are looking to correct for alignment errors of the light source & photodetectors relative to the shaft, wobble due to imperfections in the bearings, or perhaps they’re trying to avoid a dead-spot in the response of the photodiodes as the shaft approaches the extremes of rotation. Or perhaps the explanation is as simple as this: removing the light-occluding paddle reduces the mass of the shaft assembly, allowing it to rotate faster, and four photodetectors were required to produce an accurate reading from a notch instead of a paddle. When I reached out to Formlabs to ask about this, someone in the know responded that the new design is an improvement on three issues: more signal leading to an improved SNR, reduced impact of off-axis shaft motion, and reduced thermal drift due to better symmetry.
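    To make the common-mode argument concrete, here is the usual normalization trick in sketch form. This is purely my own illustration of the principle, not Formlabs’ actual feedback algorithm; in particular, the mapping of the four diodes onto two axes is an assumption.

    /* two diodes: the normalized difference cancels emitter brightness,
     * leaving a signal proportional to shaft angle */
    static double angle_signal_2(double a, double b)
    {
        return (a - b) / (a + b);
    }

    /* four diodes: one normalized difference tracks rotation, while the
     * orthogonal difference could track off-axis shaft motion or drift */
    static double angle_signal_4(double a, double b, double c, double d)
    {
        double sum = a + b + c + d;          /* total light normalizes emitter drift */
        return ((a + c) - (b + d)) / sum;    /* ((a + b) - (c + d)) / sum would be
                                                the off-axis term in this layout     */
    }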

    This is the shaft plus bearings once it’s pulled out of the body of the galvo. The gray region in the middle is the permanent magnet, and it’s very strong.

    And this is staring back into the galvo with the shaft removed. You can see the edges of the voice coils. I couldn’t remove them from the housing, as they seem to be fixed in place with some kind of epoxy.

    Epilogue
    And there you have it – the Form 2, from taking off its outer metal case down to the guts of its galvanometers. It was a lot of fun tearing down the Form 2, and I learned a lot while doing it. I hope you also enjoyed reading this post, and perhaps gleaned a couple of useful bits of knowledge along the way.

    If you think Formlabs is doing cool stuff and solving interesting problems, good news: they’re hiring! They have new positions for a Software Lead and an Electrical Systems Lead. Follow the links for a detailed description and application form.

    by bunnie at March 22, 2016 05:58 PM

    Winner, Name that Ware February 2016

    The Ware for February 2016 was indeed a Commodore 65 prototype. As expected, the ware was quite easy to guess, and the prize goes to Philipp Mundhenk. Congrats, email me for your prize!

    Here’s an image of the full motherboard, and its boot screen:

    by bunnie at March 22, 2016 05:50 PM

    Free Electrons

    Free Electrons contributing Linux kernel initial support for Annapurna Labs ARM64 Platform-on-Chip

    We are happy to announce that on February 8th 2016 we submitted to the mainline Linux kernel the initial support for the Annapurna Labs Alpine v2 Platform-on-Chip based on the 64-bit ARMv8 architecture.

    See our patch series:

    Annapurna Labs was founded in 2011 in Israel. Annapurna Labs provides 32-bit and 64-bit ARM products, including chips and subsystems under the Alpine brand for home NAS, gateway and WiFi router equipment; see this page for details. The 32-bit version already has support in the official Linux kernel (see alpine.dtsi), and we have started to add support for the quad-core 64-bit version, called Alpine v2, which brings a significant performance increase for home applications.

    This is our initial contribution and we plan to follow it with additional Alpine v2 functionality in the near future.

    by Thomas Petazzoni at March 22, 2016 05:38 AM

    March 18, 2016

    Elphel

    NAND flash support for Xilinx Zynq in U-Boot SPL

    Overview

    • Target board: Elphel 10393 (Xilinx Zynq 7Z030) with 1GB NAND flash
    • U-Boot final image files (both support NAND flash commands):
      • boot.bin – SPL image – loaded by Xilinx Zynq BootROM into OCM, no FSBL required
      • u-boot-dtb.img – full image – loaded by boot.bin into RAM
    • Build environment and dependencies (for details see this article):


     

    The story

    First of all, Ezynq was updated to use mainstream U-Boot, removing an extra dependency (u-boot-xlnx) from the chain. But since the NAND flash driver for Xilinx Zynq hasn’t made it to mainline yet, it was copied into Ezynq’s U-Boot source tree. When building, that tree is copied over the U-Boot source files. We will make a proper patch someday.

    Full image (u-boot-dtb.img)

    Next, the support for flash and commands was added to the board configuration for the full u-boot image. Required defines:

    include/configs/elphel393.h (from zynq-common.h in u-boot-xlnx):
    #define CONFIG_NAND_ZYNQ
    #ifdef CONFIG_NAND_ZYNQ
    #define CONFIG_CMD_NAND_LOCK_UNLOCK /*zynq driver doesn't have lock/unlock commands*/
    #define CONFIG_SYS_MAX_NAND_DEVICE 1
    #define CONFIG_SYS_NAND_SELF_INIT
    #define CONFIG_SYS_NAND_ONFI_DETECTION
    #define CONFIG_MTD_DEVICE
    #endif
    #define CONFIG_MTD

    NOTE: the original Zynq NAND flash driver for U-Boot (zynq_nand.c) doesn’t have Lock/Unlock commands; the same applies to pl35x_nand.c in the kernel they provide. By design, the NAND flash chip on the 10393 is locked (write protected) on power-on. While these commands were added to both drivers, there’s no need to unlock in U-Boot, since all of the writing will be performed from the OS, booted from either flash or a micro SD card. Some other designs with NAND flash do not lock the flash on power-on.

    And configs/elphel393_defconfig:

    CONFIG_CMD_NAND=y

    There are a few more small modifications to add the driver to the build – see ezynq/u-boot-tree. Anyway, it worked on the board. Easy. Type “nand” in the U-Boot terminal for the available commands.

    SPL image (boot.bin)

    Then the changes for the SPL image were made.

    Currently U-Boot runs twice to build both images. For the SPL run it sets CONFIG_SPL_BUILD, and the results are found in the spl/ folder. So, in general, anyone who would like to build U-Boot with an SPL that supports NAND flash for some other board should check common/spl/spl_nand.c for the required functions; they are:

    nand_spl_load_image()
    nand_init() /*no need if drivers/mtd/nand.c is included in the SPL build*/
    nand_deselect() /*usually an empty function*/

    And look in drivers/mtd/nand/ for SPL driver examples – there are not too many of them, for some reason.

    For nand_init() I included drivers/mtd/nand.c – it calls board_nand_init() which is found in the driver for the full image – zynq_nand.c.
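    To illustrate how these pieces fit together, here is a rough sketch of the load path that common/spl/spl_nand.c ultimately performs with these hooks. It is simplified, the wrapper function is hypothetical, and the fixed image length stands in for the size that the real SPL parses from the image header – this is not the exact U-Boot code of that era.

    #include <common.h>   /* U-Boot: image_header, CONFIG_* definitions */
    #include <nand.h>     /* nand_init(), nand_spl_load_image(), nand_deselect() */

    /* hypothetical wrapper, for illustration only */
    static void spl_nand_load_u_boot(void)
    {
        struct image_header *header = (void *)CONFIG_SYS_TEXT_BASE;

        nand_init();   /* ends up in board_nand_init() of zynq_nand.c */

        /* peek at the image header first... */
        nand_spl_load_image(CONFIG_SYS_NAND_U_BOOT_OFFS, sizeof(*header), header);

        /* ...then copy the full u-boot-dtb.img into RAM; the real code parses the
         * length from the header, a fixed size is used here to keep the sketch short */
        nand_spl_load_image(CONFIG_SYS_NAND_U_BOOT_OFFS, 0x80000,
                            (void *)CONFIG_SYS_NAND_U_BOOT_DST);

        nand_deselect();   /* usually an empty function */
    }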

    Defines in include/configs/elphel393.h:

    #define CONFIG_SPL_NAND_ELPHEL393
    #define CONFIG_SYS_NAND_U_BOOT_OFFS 0x100000 /*look-up in dts!*/
    #define CONFIG_SPL_NAND_SUPPORT
    #define CONFIG_SPL_NAND_DRIVERS
    #define CONFIG_SPL_NAND_INIT
    #define CONFIG_SPL_NAND_BASE
    #define CONFIG_SPL_NAND_ECC
    #define CONFIG_SPL_NAND_BBT
    #define CONFIG_SPL_NAND_IDS
    /* Load U-Boot to this address */
    #define CONFIG_SYS_NAND_U_BOOT_DST CONFIG_SYS_TEXT_BASE
    #define CONFIG_SYS_NAND_U_BOOT_START CONFIG_SYS_NAND_U_BOOT_DST

    CONFIG_SYS_NAND_U_BOOT_OFFS 0x100000 is the offset in the flash where u-boot-dtb.img is written – the writing itself is done from the OS. The flash partitions are defined in the device tree for the kernel.

    Again a few small modifications (KConfigs and makefiles) to include everything in the build – see ezynq/u-boot-tree.

    NOTES:

    • Before, boot.bin was about 60K (out of the 192K available). After everything was included, the size is 110K. It fits, so trimming the driver down to only what is needed – init and read – can be done some time in the future.
    • drivers/mtd/nand/nand_base.c – kzalloc would hang the board, so it had to be changed in the SPL build.
    • drivers/mtd/nand/zynq_nand.c – added a timeout to some flash functions (NAND_CMD_RESET); this addresses the case where the board has the flash width configured (through MIO pins) but doesn’t carry flash, or the flash cannot be detected for some reason. Without the timeout, such boards hang.

    Other Notes

    • With U-Boot moving to Kbuild, nobody knows what will happen to CONFIG_EXTRA_ENV_SETTINGS – a multi-line define.
    • Current U-Boot uses a stripped-down device tree – added to Ezynq.
    • The ideal scenario is to boot from SPL straight into the OS – Falcon mode (CONFIG_SPL_OS_BOOT). To be considered in the future.
    • Tertiary Program Loader (TPL) – no plans.

     

    by Oleg Dzhimiev at March 18, 2016 11:40 PM

    Free FPGA: Reimplement the primitives models

    We added the AHCI SATA controller Verilog code to the rest of the camera FPGA project; together they now use 84% of the Zynq slices. Building the FPGA bitstream file requires proprietary tools, but all of the simulation can be done with just Free Software – Icarus Verilog and GTKWave. Unfortunately it is not possible to distribute a complete set of the files needed – our code instantiates a few FPGA primitives (hard-wired modules of the FPGA) that are under a proprietary license.

    Please help us to free the FPGA devices for developers by re-implementing the primitives as Verilog modules under GNU GPLv3+ license – in that case we’ll be able to distribute a complete self-sufficient project. The models do not need to provide accurate timing – in many cases (like in ours) just the functional simulation is quite sufficient (combined with the vendor static timing analysis). Many modules are documented in Xilinx user guides, and you may run both the original and replacement models through the simulation tests in parallel, making sure the outputs produce the same signals. It is possible that such designs can be used as student projects when studying Verilog.

    Models we are looking for

    The camera project includes more than 200 Verilog files, and it depends on just 29 primitives from the Xilinx simulation library (total number of the files there is 214):

    • BUFG.v
    • BUFH.v
    • BUFIO.v
    • BUFMR.v
    • BUFR.v
    • DCIRESET.v
    • GLBL.v
    • IBUF.v
    • IBUFDS_GTE2.v
    • IBUFDS.v
    • IDELAYCTRL.v
    • IDELAYE2_FINEDELAY.v
    • IDELAYE2.v
    • IOBUF_DCIEN.v
    • IOBUF.v
    • IOBUFDS_DCIEN.v
    • ISERDESE1.v *
    • MMCME2_ADV.v
    • OBUF.v
    • OBUFT.v
    • OBUFTDS.v
    • ODDR.v
    • ODELAYE2_FINEDELAY.v
    • OSERDESE1.v *
    • PLLE2_ADV.v
    • PS7.v
    • PULLUP.v
    • RAMB18E1.v
    • RAMB36E1.v

    This is just a raw list of the unisims modules referenced in the design; it includes PS7.v – a placeholder model of the ARM processing system (modules for AXI functionality simulation are already included in the project). That implementation is incomplete, but sufficient for the camera simulation, and it can be used for other Zynq-based projects. Some primitives are very simple (like DCIRESET), some are much more complex. Two modules (ISERDESE1.v and OSERDESE1.v) in the project are the open-source replacements for the encrypted models of the enhanced hardware in Zynq (ISERDESE2.v and OSERDESE2.v) – we used a simple ifdef wrapper that selects the reduced (but sufficient for us) functionality of the earlier open-source model for simulation and the current “black box” for synthesis.

    The list above includes all the files we need for our current project; as soon as Free Software replacements become available, we will be able to distribute a self-sufficient project. Other FPGA development projects may need other primitives, so ideally we would like to see free simulation models for all of them.

    Why is it important

    Elphel is developing high-performance products based on FPGA designs that we believe are created for Freedom. We share all the code with our users under the GNU General Public License version 3 (or later), but the project depends on proprietary tools distributed by vendors who hold a monopoly on the tools for their silicon.

    There are very interesting projects (like icoBOARD) that use smaller devices with a completely Free toolchain (Yosys), but the work of those developers is seriously complicated by the non-cooperation of the FPGA vendors. I hope that in the future there will be laws that limit the monopoly of device manufacturers and require complete documentation for the products they release to the public. Patent law is already advanced enough to protect the FPGA manufacturers and their inventions from competitors; there is no real need for them to fight their own users by hiding the documentation for their products.

    Otherwise this secrecy and “Security through Obscurity” will eventually (and rather soon) lead to a very insecure world where all those self-driving cars and “smart homes” obey not us, but the “bad guys”, as today’s software malware reaches ever deeper hardware levels. It is very naive to believe that the manufacturers are the ultimate masters in complete control of “their” devices of ever-growing complexity. Unfortunately they do not realize this and are still living in 20th-century dreams, treating their users as kids who can only play with “Lego blocks” and believe in powerful Wizards who pretend to know everything.

    We use a proprietary toolchain for implementation, but exclusively Free tools for simulation

    Our projects require devices more advanced than those that can already be programmed with independently designed Free Software tools, so we have to use the proprietary ones for implementation. Freeing the simulation, however, seems achievable, and we made a step in this direction by making the whole project simulation possible with Free Software. Working with the HDL code and simulating it takes the largest part of the FPGA design cycle – in our experience 2/3 to 3/4 – and only the remaining part involves running the toolchain and testing/troubleshooting the hardware. That last step (hardware troubleshooting) can also be done without any proprietary software – we never used any in this project, which utilizes most of the Xilinx Zynq FPGA resources. The combination of Verilog modules and extensible Python programs that run on the target devices proved to be a working and convenient solution that keeps the developer in full control of the process. These programs read the Verilog header files with parameter definitions to keep register and bit-field addresses synchronized between the hardware and the software that uses them.

    Important role of the device primitives models

    Modern FPGAs include many hard-wired embedded modules that supplement the uniform “sea of gates” – the addition of such modules significantly increases the performance of the device while preserving its flexibility. The modules include memory blocks, DSP slices, PLL circuits, serial-to-parallel and parallel-to-serial converters, programmable delays, high-speed serial transceivers, processor cores and more. Some modules can be automatically inferred by the synthesis software from the source HDL code, but in many cases we have to instantiate such primitives directly, and that code then references the device primitives.

    The fewer primitives are directly instantiated in a project, the more portable (not tied to a particular FPGA architecture) it is. But in some cases the synthesis tools (which are proprietary, so not fixable by the users) infer the primitives incorrectly, and in others the module functionality is so specific to the device that the synthesis tool will not even try to recognize it in behavioral Verilog code.

    Even open source proprietary modules are inconvenient

    In earlier days Xilinx provided all of their primitives models as source code (though under a non-free license), so it was possible to use Free Software tools to simulate the design. But even then it was not very convenient, for either our users or ourselves.

    It is not possible to distribute the proprietary code with our projects, so our users had to register with the FPGA manufacturer, download the multi-gigabyte software distribution and agree to specific license terms before they were able to extract the primitives models missing from our project repository. The software license includes a requirement to install mandatory spyware, giving permission to transfer your files to the manufacturer – this may be unacceptable for many of our users.

    It is also inconvenient for ourselves. The primitives models provided by the manufacturer sometimes have problems – they either do not match the actual hardware or lack full compatibility with the simulator programs we use. In such cases we provided patches that could be applied to the manufacturer’s code. If Xilinx kept the models in a public Git repository, we could base our patches on particular tags or commits, but that is not the case, and the manufacturer/software provider reserves the right to change the distributed files at any time without notice. So we have to update the patches to keep the simulation working even when we did not change a single line of our own code.

    Encrippled modules are unacceptable

    When I started working on the FPGA design for Zynq, I was surprised to notice that Xilinx had abandoned the practice of providing source code for the simulation models of the device primitives. The new versions of the older primitives (such as ISERDESE2.v and OSERDESE2.v, replacing the previous ISERDESE1.v and OSERDESE1.v) now come in encrippled (crippled by encryption) form, while they were open-sourced before. And it is likely this alarming tendency will continue – many proprietary vendors hide source code simply because they are not so proud of its quality, and cannot resist the temptation to encrypt it instead of removing obsolete statements and updating the code to modern standards.

    Such code is not just inconvenient, it is completely unacceptable for our design process. The first obvious reason is that it is not compatible with the most important development tool – a simulator. Xilinx provides decryption keys to trusted vendors of proprietary simulators and I do not have plans to abandon my choice of the tool just because the FPGA manufacturer prefers a different one.

    Personally, I would not use any “black boxes” even if Icarus supported them – FPGA design is already complex enough without spending extra time of your life guessing why a “black box” behaves differently than expected. And all the “black boxes” and “wizards” are limited and never match the real hardware 100%. That is fine when they cover most of the cases and you have the ability to peek inside when something goes wrong, so you can isolate the bug and (if it is actually a bug of the model, not your code) report it precisely and work out the solution with the manufacturer’s support. Reporting problems in the form “my design does not work with your black box” is rather useless even when you provide all your code – it would be a difficult task for the support team to troubleshoot a mixture of your and their code, something you could do better yourself.

    So far we have used two different solutions to handle encrypted modules. In one case, when an older non-crippled model was available, we simply used the older version for the new hardware; the other case required a complete re-implementation of the GTX serial transceiver model. The current code has many limitations even with its 3000+ lines, but it proved sufficient for the SATA controller development.

    Additional permission under GNU GPL version 3 section 7

    GNU General Public License version 3 offers a tool for applying the license in what is still a “grey area” for FPGA code. When we used the earlier GPLv2 for our FPGA projects, we realized that it was more a statement of intentions than a binding license – the FPGA bitstream, as well as the simulation, inevitably combined free and proprietary components. That was OK for us as the copyright holders, but it would make it impossible for others to distribute their derivative projects in a GPL-compliant way. Version 3 has a Section 7 that can be used to grant permission to distribute derivative projects that depend on non-free components which are still needed to:

    1. generate a bitstream (equivalent to a software “binary”) file and
    2. simulate the design with Free Software tools

    The GPL requirement to provide other components under the same license terms when distributing the combined work remains in force – it is not possible to mix this code with any other non-free code. The following is our wording of the additional permission as included in every Verilog file header in Elphel FPGA projects.

    Additional permission under GNU GPL version 3 section 7:
    If you modify this Program, or any covered work, by linking or combining it
    with independent modules provided by the FPGA vendor only (this permission
    does not extend to any 3-rd party modules, "soft cores" or macros) under
    different license terms solely for the purpose of generating binary "bitstream"
    files and/or simulating the code, the copyright holders of this Program give
    you the right to distribute the covered work without those independent modules
    as long as the source code for them is available from the FPGA vendor free of
    charge, and there is no dependence on any encrypted modules for simulating of
    the combined code. This permission applies to you if the distributed code
    contains all the components and scripts required to completely simulate it
    with at least one of the Free Software programs.

    Available documentation for Xilinx FPGA primitives

    Xilinx has User Guides files available for download on their web site, some of the following links include release version and may change in the future. These files provide valuable information needed to re-implement the simulation models.

    • UG953 Vivado Design Suite 7 Series FPGA and Zynq-7000 All Programmable SoC Libraries Guide lists all the primitives, their I/O ports and attributes
    • UG474 7 Series FPGAs Configurable Logic Block has description of the CLB primitives
    • UG473 7 Series FPGAs Memory Resources has description for Block RAM modules, ports, attributes and operation of these modules
    • UG472 7 Series FPGAs Clocking Resources provides information for the clock buffering (BUF*) primitives and clock management tiles – MMCM and PLL primitives of the library
    • UG471 7 Series FPGAs SelectIO Resources covers advanced I/O primitives, including DCI, programmable I/O delays elements and serializers/deserializers, I/O FIFO elements
    • UG476 7 Series FPGAs GTX/GTH Transceivers is dedicated to the high speed serial transceivers. Simulation models for these modules are partially re-implemented for use in AHCI SATA Controller.

    by Andrey Filippov at March 18, 2016 10:42 PM

    March 15, 2016

    Elphel

    AHCI platform driver

    AHCI PLATFORM DRIVER

    In kernels prior to 2.6.x, AHCI was only supported through PCI, and hence custom patches were required to support platform AHCI implementations. All modern kernels have SATA support as part of the AHCI framework, which significantly simplifies driver development. Platform drivers follow the standard driver model convention described in Documentation/driver-model/platform.txt in the kernel source tree and provide methods, called during discovery or enumeration, in their platform_driver structure. This structure is used to register the platform driver and is passed to the module_platform_driver() helper macro, which replaces the module_init() and module_exit() functions. We redefined the probe() and remove() methods of platform_driver in our driver to initialize/deinitialize the resources defined in the device tree and to allocate/deallocate memory for the driver-specific structure. We also opted to use the resource-managed function devm_kzalloc(), as it seems to be the preferred way of resource allocation in modern drivers. Memory allocated with a resource-managed function is associated with the device and will be freed automatically after the driver is unloaded.
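    To make that boilerplate concrete, here is a minimal sketch of the registration path. The probe/remove bodies, the private structure and the compatible string are placeholders of my own, not the actual Elphel sources; only the kernel API itself (platform_driver, module_platform_driver(), devm_kzalloc()) is real.

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/of.h>
    #include <linux/slab.h>

    struct elphel_ahci_priv {        /* driver-specific structure (contents omitted) */
        void __iomem *regs;
    };

    static int elphel_ahci_probe(struct platform_device *pdev)
    {
        struct elphel_ahci_priv *priv;

        /* resource-managed allocation: freed automatically when the driver unloads */
        priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
            return -ENOMEM;

        /* ...map the registers and IRQs described in the device tree, init the HBA... */
        platform_set_drvdata(pdev, priv);
        return 0;
    }

    static int elphel_ahci_remove(struct platform_device *pdev)
    {
        /* devm_* allocations need no explicit kfree() here */
        return 0;
    }

    static const struct of_device_id elphel_ahci_of_match[] = {
        { .compatible = "elphel,ahci-elphel" },   /* placeholder compatible string */
        { /* sentinel */ },
    };

    static struct platform_driver elphel_ahci_driver = {
        .probe  = elphel_ahci_probe,
        .remove = elphel_ahci_remove,
        .driver = {
            .name           = "elphel-ahci",
            .of_match_table = elphel_ahci_of_match,
        },
    };
    module_platform_driver(elphel_ahci_driver);   /* replaces module_init()/module_exit() */

    MODULE_LICENSE("GPL");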

    HARDWARE LIMITATIONS

    As Andrey has already pointed out in his post, the current implementation of the AHCI controller has several limitations, and our platform driver is affected by two of them.
    First, there is a deviation from the AHCI specification which should be considered during platform driver implementation. The specification defines that the host bus adapter uses system memory for the Command List Structure, Received FIS Structure and Command Tables. The common approach in platform drivers is to allocate a block of system memory with a single dmam_alloc_coherent() call, set pointers to the different structures inside this block and store these pointers in the port-specific structure ahci_port_priv. In x393_sata, the first two of these structures are stored in FPGA RAM blocks and mapped to register memory, as it was easier to make them this way. Thus we need to allocate a block of system memory for the Command Tables only and set the other pointers to predefined addresses.
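    As a hedged illustration of that memory layout (the field names follow struct ahci_port_priv from libahci, but the address macros and everything else are placeholders of my own, not the actual Elphel .port_start code discussed further below):

    static int elphel_port_start(struct ata_port *ap)
    {
        struct device *dev = ap->host->dev;
        struct ahci_port_priv *pp;
        void *mem;

        pp = devm_kzalloc(dev, sizeof(*pp), GFP_KERNEL);
        if (!pp)
            return -ENOMEM;

        /* Only the Command Tables live in ordinary system memory; the DMA mapping
         * of this buffer (streaming in the Elphel driver, see below) is omitted. */
        mem = devm_kzalloc(dev, AHCI_CMD_TBL_AR_SZ, GFP_KERNEL);
        if (!mem)
            return -ENOMEM;
        pp->cmd_tbl = mem;

        /* The Command List and Received FIS areas are FPGA block RAM exposed through
         * the register space, so their pointers are set to fixed addresses. */
        pp->cmd_slot = (void *)X393_CMD_LIST_VADDR;   /* placeholder macro */
        pp->rx_fis   = (void *)X393_RX_FIS_VADDR;     /* placeholder macro */

        ap->private_data = pp;
        return 0;
    }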
    Second, and the most significant limitation from the driver’s point of view, proved to be the single command slot implemented in hardware. Low-level drivers assume that all 32 slots in the Command List Structure are implemented, and explicitly use the last slot for internal commands in the ata_exec_internal_sg() function, as shown in the following code snippet:
    struct ata_queued_cmd *qc;
    unsigned int tag, preempted_tag;
     
    if (ap->ops->error_handler)
        tag = ATA_TAG_INTERNAL;
    else
        tag = 0;
    qc = __ata_qc_from_tag(ap, tag);

    ATA_TAG_INTERNAL is defined in libata.h and reserved for internal commands. We wanted to keep all the code of our driver in our own sources and make as few changes to existing Linux drivers as possible, to simplify further development and upgrades to newer kernels. So we decided that substituting the command tag in our own command-preparation code would be the easiest way to fix this issue.

    DRIVER STRUCTURES

    Proper platform driver initialization requires several structures to be prepared and passed to platform functions during driver probing. One of them is scsi_host_template, which serves as a direct interface between middle-level and low-level drivers. Most AHCI drivers use the default AHCI_SHT macro to fill the structure with predefined values. This structure contains a field called .can_queue which is of particular interest to us: .can_queue sets the maximum number of simultaneous commands the host bus adapter can accept, and this is the way to tell the middle-level drivers that our controller has only one command slot. The scsi_host_template structure was redefined in our driver as follows:
    static struct scsi_host_template ahci_platform_sht = {
        AHCI_SHT(DRV_NAME),
        .can_queue = 1,
        .sg_tablesize = AHCI_MAX_SG,
        .dma_boundary = AHCI_DMA_BOUNDARY,
        .shost_attrs = ahci_shost_attrs,
        .sdev_attrs = ahci_sdev_attrs,
    };

    Unfortunately, the ATA layer driver does not take into consideration the value we set in this template and uses a hard-coded tag value for its internal commands, as I pointed out earlier, so we had to fix this in the command preparation handler.
    ata_port_operations is another important driver structure as it controls how the low level driver interfaces with upper layers. This structure is defined as follows:
    static struct ata_port_operations ahci_elphel_ops = {
        .inherits = &ahci_ops,
        .port_start = elphel_port_start,
        .qc_prep = elphel_qc_prep,
    };

    The port start and command preparation handlers were redefined to add some implementation-specific code. .port_start is used to allocate memory for the Command Table and to set the pointers to the Command List Structure and Received FIS Structure; we decided to use streaming DMA mapping instead of the coherent DMA mapping used in the generic AHCI driver, as explained in Andrey’s article. .qc_prep is used to change the tag of the current command and to organize proper access to the DMA-mapped buffer.
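    The tag substitution itself is simple. Here is a minimal sketch of the idea – my own illustration, not the actual elphel_qc_prep() code; in particular, the command-table and FIS setup that normally follows is only hinted at in a comment.

    static void elphel_qc_prep(struct ata_queued_cmd *qc)
    {
        unsigned int tag = qc->tag;

        /* libata issues internal commands with ATA_TAG_INTERNAL (the last of the
         * 32 tags), but only slot 0 exists in this hardware, so remap the tag */
        if (tag == ATA_TAG_INTERNAL)
            tag = 0;

        /* ...then build the command FIS, PRD table and command header for slot
         * 'tag', much like the generic libahci ahci_qc_prep() does, and handle
         * the streaming DMA mapping of the buffers... */
    }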

    PERFORMANCE CONSIDERATIONS

    We used debug code in the driver along with profiling code in the controller to estimate overall performance, and found that the upper driver layers introduce significant delays into the command execution sequence. The delay between the last DMA transaction in a sequence of transactions and the next command could be as high as 2 ms. There are various sources of overhead that could lead to such delays, for instance file system operations and context switches in the operating system. We will try to use read/write operations on a raw device to improve performance.

    LINKS

    AHCI/SATA stack under GNU GPL
    GitHub: AHCI driver source code

    by Mikhail Karpenko at March 15, 2016 02:16 AM

    March 14, 2016

    Harald Welte

    Open Source mobile communications, security research and contributions

    While preparing my presentation for the Troopers 2016 TelcoSecDay I was thinking once again about the importance of having FOSS implementations of cellular protocol stacks, interfaces and network elements in order to enable security researches (aka Hackers) to work on improving security in mobile communications.

    From the very beginning, this was the motivation of creating OpenBSC and OsmocomBB: To enable more research in this area, to make it at least in some ways easier to work in this field. To close a little bit of the massive gap on how easy it is to do applied security research (aka hacking) in the TCP/IP/Internet world vs. the cellular world.

    We have definitely succeeded in that. Many people have successfully used the various Osmocom projects to do cellular security research, and I'm very happy about that.

    However, there is a flip side to that, which I'm less happy about. In those past eight years, we have not managed to attract a significant amount of contributions to the Osmocom projects from the people who benefit most from them: neither from the very security researchers who use them in the first place, nor from the telecom industry as a whole.

    I can understand that the large telecom equipment suppliers may think that FOSS implementations are somewhat a competition and thus might not be particularly enthusiastic about contributing. However, the story for the cellular operators and the IT security crowd is definitely quite different. They should have no good reason not to contribute.

    So as a result of that, we still have a relatively small number of people contributing to Osmocom projects, which is a pity. They can currently be divided into two groups:

    • the enthusiasts: People contributing because they are enthusiastic about cellular protocols and technologies.
    • the commercial users, who operate 2G/2.5G networks based on the Osmocom protocol stack and who either contribute directly or fund development work at sysmocom. They typically operate small/private networks, so if they want data, they simply use Wifi. There's thus not a big interest or need in 3G or 4G technologies.

    On the other hand, the security folks would love to have 3G and 4G implementations that they could use to talk to either mobile devices over a radio interface, or towards the wired infrastructure components in the radio access and core networks. But we don't see significant contributions from that sphere, and I wonder why that is.

    At least that part of the IT security industry that I know typically works with very comfortable budgets and profit rates, and investing in better infrastructure/tools is not charity anyway, but an actual investment into working more efficiently and/or extending the possible scope of related pen-testing or audits.

    So it seems we might want to think what we could do in order to motivate such interested potential users of FOSS 3G/4G to contribute to it by either writing code or funding associated developments...

    If you have any thoughts on that, feel free to share them with me by e-mail to laforge@gnumonks.org.

    by Harald Welte at March 14, 2016 11:00 PM

    TelcoSecDay 2016: Open Source Network Elements for Security Analysis of Mobile Networks

    Today I had the pleasure of presenting about Open Source Network Elements for Security Analysis of Mobile Networks at the Troopers 2016 TelcoSecDay.

    The main topics addressed by this presentation are:

    • Importance of Free and Open Source Software implementations of cellular network protocol stacks / interfaces / network elements for applied telecom security research
    • The progress we've made at Osmocom over the last eight years.
    • An overview of our current efforts to implement a 3G network similar to the existing 2G/2.5G/2.75G implementations.

    There are no audio or video recordings of this session.

    Slides are available at http://git.gnumonks.org/index.html/laforge-slides/plain/2016/telcosecday/foss-gsm.html

    by Harald Welte at March 14, 2016 11:00 PM

    March 13, 2016

    Bunnie Studios

    Preparing for Production of The Essential Guide To Electronics in Shenzhen

    The crowd funding campaign for The Essential Guide to Electronics in Shenzhen is about to wrap up in a couple of days.

    I’ve already started the process of preparing the printing factory for production. Last week, I made another visit to the facility to discuss production forecasts and lead time, and to review the latest iteration of the book’s prototype. It’s getting pretty close. I’m now using a heavy, laminated cardstock for the tabbed section dividers to improve their durability. The improved tabs push up the cost of the book and, more significantly, push the shipping weight of the book over 16 oz, which means I’m now paying a higher rate for postage. However, this is mostly offset by the higher print volume, so I can mitigate the unexpected extra costs.

    The printing factory has a lot of mesmerizing machines running on the floor, like this automatic cover binder for perfect-bound books:

    And this high speed two-color printing press:

    This is probably the very press that the book will be printed on. The paper moves so fast that it’s just a blur as an animated gif. I estimate it does about 150 pages per minute, and each page is about a meter across, which gives it an effective throughput of over a thousand book-sized pages per minute. Even for a run of a couple thousand books, this machine would only print for about 15 minutes before it has to stop for a printing plate swap, an operation which takes a few minutes to complete. This explains why books don’t get really cheap until the volume reaches tens of thousands of copies.

    Above is the holepunch used for building prototypes of ring-bound books. The production punch is done using a semi-automated high-volume die-cutter, but for the test prints, this is the machine used to punch out the holes.


    The ring binding itself is done by a fairly simple machine. The video above shows the process used to adjust the machine’s height for a single shot on the prototype book. In a production scenario, there would be a few workers on the table to the left of the binding machine aligning pages, adding the covers, and inserting the ring stock. Contrast this to the fully automated perfect binding machine shown at the top of this post — ring binding is a much more expensive binding style in this factory, since they haven’t automated the process (yet).

    I also got a chance to see the machine that gilds and debosses the book cover. It’s a bit of a different process than the edge-gilding I described in the previous post about designing the cover.

    Here, an aluminum plate is first made with the deboss pattern. It looks pretty neat — I’ve half a mind to ask the laoban if he’d save the used plates for me to keep as a souvenir, although the last thing I need in my tiny flat in Singapore is more junk.

    The plate is then glued into a huge press. This versatile machine can do debossing, die cutting, and gilding on sheets of paper as large as A0. For the gilding operation, the mounting face for the aluminum plate is heated to around 130 degrees Celsius.

    I think it’s kind of cute how they put good luck seals all over the machines. The characters say “kai gong da ji”, which literally translated means “start operation, big luck”. I don’t know the underlying reason — maybe it’s to wish good luck on the machine, the factory, or the operator; or maybe to fix its feng shui, or some kind of voodoo to keep the darned thing from breaking down again. I’ll have to remember to ask about the sticker next time I visit.

    Once at temperature, the gilding foil is drawn over the plate, and the alignment of the plate is determined by doing a test shot onto a transparent plastic sheet. The blank cover is then slid under the sheet, taped in place, and the clear sheet removed.

    The actual pressing step is very fast — so fast I didn’t have a chance to turn my camera into video mode, so I only have a series of three photos to show the before, pressing, and after states.

    And here’s a photo of me with the factory laoban (boss), showing off the latest prototype. I’ve often said that if you can’t meet the laoban, the factory’s too big for you. Having a direct relationship with the laoban has been helpful for this project; he’s very patiently addressed all my strange customization requests, and as a side bonus he seems to know all the good restaurants in the area so the after-work meals are usually pretty delicious.

    I’m looking forward to getting production started on the book, and getting all the pledge rewards delivered on-time. Now’s the last chance to back the crowd funding campaign and get the book at a discounted price. I will order some extra copies of the book, but it’s been hard to estimate demand, so there’s a risk the book could sell out soon after the campaign concludes.

    by bunnie at March 13, 2016 02:11 PM

    March 12, 2016

    Elphel

    AHCI/SATA stack under GNU GPL

    The implementation includes an AHCI SATA host adapter written in Verilog under the GNU GPLv3+ and a software driver for GNU/Linux running on Xilinx Zynq. The complete project is simulated with Icarus Verilog; no encrypted modules are required.

    This concludes the last major FPGA development step in our race against the finished camera parts and boards already arriving at the Elphel facility, before the NC393 can be shipped to our customers.

    Fig. 1. AHCI Host Adapter block diagram


    Why did we need SATA?

    Elphel cameras started as network cameras – devices attached to and controlled over Ethernet. The previous generations used a 100Mbps connection (limited by the SoC hardware), and the NC393 uses GigE. But even this bandwidth is not sufficient, as many camera applications require high image quality (compared to “raw”) without the compression artifacts that are always present (even if not noticeable to the human viewer) with video codecs. Recording video/images to some storage media is definitely an option, and we used it in the older cameras too, but the SoC IDE controller limited the recording speed to just 16MB/s. That was about twice the 100Mb/s network rate, but still a bottleneck for the system in many cases. The NC393 can generate 12 times the pixel rate of the NC353 (4 simultaneous channels instead of a single one, each running 3 times faster), so we need roughly 12 × 16MB/s ≈ 200MB/s recording speed to keep the same compression quality at the increased maximal frame rate; an even higher recording rate, which modern SSDs are capable of, is very desirable.

    Fig.2. SATA routing: a) Camera records data to the internal SSD; b) Host computer connects directly to the internal SSD; c) Camera records to the external mass storage device

    The most universal ways to attach a mass storage device to the camera are USB, SATA and PCIe. USB-2 is too slow, and USB-3 is not available in the Xilinx Zynq that we use. So what remains are SATA and PCIe. Both interfaces are possible to implement in Zynq, but PCIe (being faster, as it uses multiple lanes) is good for internal storage, while SATA (in the form of eSATA) can also be used to connect external storage devices. We may consider adding PCIe capability to boost recording speed, but for the initial implementation SATA seems more universal, especially when using a trick we tested in the Eyesis series of cameras for fast unloading of the recorded data.

    Routing SATA in the camera

    It is a solution similar to USB On-The-Go (a similar term for SATA is used for unrelated devices), where the same connector is used both to interface a smartphone to a host PC (the PC is the host, the smartphone – the device) and to connect a keyboard or other device when the phone itself becomes the host. In contrast to USB cables, eSATA cables have always had identical connectors on both ends, so nothing prevented physically linking two computers or two external drives together. As eSATA does not carry power this is safe to do, but nothing will work – two computers will not talk to each other, and the storage devices will not be able to copy data between themselves. One of the reasons is that the two signal pairs in a SATA cable are uni-directional – pair A is an output for the host and an input for the device, pair B – the opposite.

    The camera uses a Vitesse (now Microsemi) VSC3304 crosspoint switch (Eyesis uses the larger VSC3312) that has a very useful feature – reversible I/O ports, so the same physical pins can be configured as inputs or outputs, making it possible to use a single eSATA connector in both host and device mode. Additionally, the VSC3304 allows changing the output signal level (eSATA requires a higher swing than internal SATA) and performs analog signal correction on both inputs and outputs, which helps maintain signal integrity between the attached SATA devices.

    Aren’t SATA implementations for Xilinx Zynq already available?

    Yes and no. When starting the NC393 development I contacted Ashwin Mendon, who already had SATA-2 working on Xilinx Virtex. The code is available on OpenCores under the GNU GPL license, and there is an article published by IEEE. The article turned out to be very useful for our work, but the code itself had to be mostly re-written – it was for different hardware, and we were not able to simulate the core as it depends on Xilinx proprietary encrypted primitives – a feature not compatible with the free software simulators we use.

    Other implementations we could find (including a complete commercial solution for Xilinx Zynq) have licenses that are not compatible with the GNU GPLv3+, and as the FPGA code is “compiled” to a single “binary” (bitstream file), it is not possible to mix free and proprietary code in the same design.

    Implementation

    The SATA host adapter is implemented for the Elphel NC393 camera; documentation for the 10393 system board is on our wiki page. The Verilog code is hosted at GitHub, and the GNU/Linux driver ahci_elphel.c is there as well (it is the only hardware-specific driver file required). The repository contains a complete setup for simulation with Icarus Verilog and synthesis/implementation with Xilinx tools as a VDT (plugin for Eclipse IDE) project.

    Current limitations

    The current project was designed to be a minimal useful implementation with provisions to future enhancements. Here is the list of what is not yet done:

    • It is only SATA2 (3Gb/s) while the hardware is SATA3 (6Gb/s) capable. We will definitely work on SATA3 after we complete migration to the new camera platform. Most of the project modules are already designed for the higher data rate.
    • No scrambling of outgoing primitives, only recognition of incoming ones. Generation of CONTp is optional by the SATA standard, but we will definitely add it as it reduces EMI, and we have already implemented multiple hardware measures in this direction. Most likely we will need it for CE certification.
    • No FIS-based switching for port multipliers.
    • Single command slot, and no NCQ. This functionality is optional in AHCI, but it will be added – not much is missing in the current design.
    • No power management. We will look for the best way to handle it as some of the hardware control (like DevSleep) requires i2c communication with the interface board, not directly under FPGA control. Same with the crosspoint switch.

    There is also a deviation from the AHCI standard that I first considered temporary, but now think will stay this way. AHCI specifies that the Command List structure (an array of 32 8-DWORD command headers) and the 256-byte Received FIS structure are stored in system memory. On the other hand, these structures need non-paged memory, are rather small and require access from both the CPU and the hardware. In x393_sata these structures are mapped to the register memory (stored in the FPGA RAM blocks) – not to the regular system memory. When working on the AHCI driver we noticed that it is even simpler to do it that way. The command tables themselves, which involve more data passing from the software to the device (especially the PRDT – physical region descriptor tables generated from the scatter-gather lists of allocated data memory), are stored in system memory as required and are read into the hardware by the DMA engine of the controller.
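    For reference, one entry of the Command List that the AHCI specification defines (and that x393_sata keeps in FPGA block RAM rather than in system memory) looks roughly like this in C; the field names are mine and are not taken from the driver source:

    #include <stdint.h>

    /* One of the 32 command headers in the AHCI Command List (8 DWORDs each). */
    struct ahci_cmd_header {
        uint32_t dw0;          /* CFL, ATAPI, Write, Prefetchable, PMP, PRDTL ... */
        uint32_t prdbc;        /* PRD byte count actually transferred */
        uint32_t ctba;         /* Command Table base address, low 32 bits */
        uint32_t ctbau;        /* Command Table base address, upper 32 bits */
        uint32_t reserved[4];
    };

    struct ahci_cmd_list {
        struct ahci_cmd_header slot[32];   /* 1KB total, small enough for FPGA RAM */
    };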

    As of today the code is not yet cleaned up from temporary debug additions. That will all be done in the next couple of weeks, as we need to combine this code with the large camera-specific code – the SATA controller (~6% of the FPGA resources) was developed separately from the rest of the code (~80% of the resources) because that makes both simulation and synthesis iterations much faster.

    Extras

    This implementation includes some additional functionality controlled by Verilog `ifdef directives. Two full block RAM primitives are used for capturing data in the controller. One of these “datascopes” captures incoming data right after the 10b/8b decoder – it can store either 1024 samples of the incoming data (16 bits of data plus attributes) or, in a compact form, each 32-bit primitive decoded into a 5-bit primitive/error number. In that case 6*1024 primitives are recorded – 3 times longer than the longest FIS.

    Another 4KB memory block is used for profiling – the controller timestamps and records the first 5 DWORDs of each incoming and outgoing FIS; additionally it timestamps software writes to a specific location, allowing mixed software/hardware profiling.

    This project implements run-time access to the primitive attributes using the Xilinx DRP port of the GTX elements; the same interface is used to programmatically change the logical values of the configuration inputs, making it significantly simpler to guess how the partially documented attributes change the device functionality. We will definitely need it when upgrading to SATA3.

    Code description

    Top connections

    The controller uses 3 differential I/O pads of the device – one input pair (RX on Fig.1) and one output pair (TX) make up a SATA port, and an additional dedicated input pair (CLK) provides the 150MHz clock that synchronizes most of the controller and the transmit channel of the Zynq GTX module. On the 10393 board an SI53338 spread-spectrum-capable programmable clock drives this input.

    Xilinx conventions dictate that the top level module should instantiate the SoC Processing System PS7 (I would consider connections to the PS7 as I/O ports), so the top module does exactly that and connects the AXI ports of the actual design top module to the MAXIGP1 and SAXIHP3 ports of the PS7; IRQF2P[0] provides the interrupt signal to the CPU. MAXIGP1 is one of the two 32-bit AXI ports where the CPU is the master – it is used for PIO access to the controller register memory (and for reading out debug information). SAXIHP3 is one of the 4 “high performance” 64-bit wide paths; this port is used by the controller DMA engine to transfer command tables and data to/from the device. The port numbers are selected to match the ones unused in the camera-specific code; other designs may have different assignments.

    Clocks and clock domains

    Current SATA2 implementation uses 4 different clock domains, some may be shared with other unrelated modules or have the same source.

    1. aclk is used in the MAXIGP1 channel and in the part of the MAXI REGISTERS module that synchronizes the AXI-facing port of the dual-port block RAM implementing the controller registers. 150 MHz (the maximal permitted frequency) is used; it is generated from one of the PS7 FPGA clocks
    2. hclk is used in the AXI HP3 channel, DMA Control and the synchronizing parts of the H2D CCD FIFO (host-to-device cross clock domain FIFO), D2H CCD FIFO and AFI ABORT modules. 150 MHz (the maximal permitted frequency) is used, the same as aclk
    3. mclk is used throughout most of the other modules of the controller except parts of the GTX, COMMA, 10b8 and the input parts of the ELASTIC. For the current SATA2 implementation it is 75MHz; this clock is derived from the external clock input and is not synchronous with the first two
    4. xclk – a source-synchronous clock extracted from the incoming SATA data. It drives the COMMA and 10b8 modules; ELASTIC allows data to cross clock boundaries by adding/removing ALIGNp primitives

    ahci_sata_layers

    The two lower layers of the stack (phy and link), which are independent of the controller system interface (AHCI), are instantiated in the ahci_sata_layers.v module together with the 2 FIFO buffers for D2H (incoming) and H2D (outgoing) data.

    SATA PHY

    The SATA PHY layer contains the OOB (Out Of Band) state machine responsible for handling the COMRESET, COMINIT and COMWAKE signals; the rest is just a wrapper for the functionality of the Xilinx GTX transceiver. This device includes both high-speed elements and some blocks that can be synthesized in FPGA fabric. Xilinx does not provide the source code for the GTX simulation module, and we were not able to match the hardware operation to the documentation, so in the current design we use only those parts of the GTXE2_CHANNEL primitive that can not be replaced by the fabric. The other modules are implemented as regular Verilog code included in the x393_sata project. There is a gtx_wrap module in the design that has the same input/output ports as the primitive, allowing selection of which features are handled by the primitive and which by the Verilog code, without changing the rest of the design.
    The GTX primitive itself can not be simulated with the tools we use, so the simulation module was replaced, and a Verilog `ifdef directive switches between the simulation model and the non-free primitive for synthesis. We used the same approach earlier with other Xilinx proprietary primitives.

    Link

    The Link module implements the SATA link state machine, scrambles/descrambles the data, calculates CRC for transmitted data and verifies CRC for received data. SATA does not transmit and receive data simultaneously (only control primitives), so the CRC and scrambler modules each have a single instance providing dual functionality. This module required the most troubleshooting and modification during testing of the hardware with different SSDs – at some stages the controller worked with some of them, but not with others.

    ahci_top

    The other modules of the design are included in ahci_top. Of these the largest is the DMA engine, shown as a separate block in Fig.1.

    DMA

    The DMA engine makes use of one of the Zynq 64-bit AXI HP ports. This channel includes FIFO buffers on the data and address subchannels (4 in total), which makes interfacing rather simple. The hard task is resetting the channels after failed communication between the controller and the device – even reloading the bitstream and resetting the FPGA does not help (actually it makes things even worse). I searched the Xilinx support forum and found that similar questions were only discussed between users; there was no authoritative recommendation from Xilinx staff. I added an axi_hp_abort module that watches over the I/O transactions and keeps track of what was sent to the FIFO buffers, and is able to complete transactions and drain buffers when requested.

    The DMA module reads the command table, saves the command data in a memory block to be later read by the FIS TRANSMIT module, then reads the scatter-gather memory descriptors (PRDT) (supporting pre-fetch if enabled) and reads/writes the data itself, combining the fragments.

    On the controller side, data going out towards the device (H2D CCD FIFO) and data coming from the device (D2H CCD FIFO) need to cross the clock boundary between hclk and mclk, and alignment issues have to be handled. AXI HP operates in 64-bit mode, data to/from the link layer is 32 bits wide, and AHCI allows alignment to an even number of bytes (16 bits). When reading from the device, the cross-clock-domain FIFO module does it in a single step, combining 32-bit incoming DWORDs into 64-bit ones and using a barrel shifter (with 16-bit granularity) to align data to the 64-bit memory QWORDs – the AXI HP channel provides a per-byte write mask that makes this rather easy. The H2D data is converted in 2 steps: first it crosses the clock domain boundary while being transformed to 32 bits with a 2-bit word mask that tells which of the two words in each DWORD are valid. An additional WORD STUFFER module operates in the mclk domain and consolidates the incoming sparse DWORDs into full outgoing DWORDs to be sent to the link layer.

    AHCI

    The rest of the ahci_top module is shown as the AHCI block. The AHCI standard specifies multiple registers and register groups that an HBA has. It is intended to be used for PCI devices, but the same registers can be used even when no PCI bus is physically present. The base address is programmed differently, but the relative register addressing stays the same.

    MAXI REGISTERS

    The MAXI REGISTERS module provides the register functionality and allows data to cross the clock domain boundary. The register memory is made of a dual-port block RAM module; an additional block RAM (used as ROM) is pre-initialized to make each bit field of the register bank RW (read/write), RO (read only), RWC (read, write 1 to clear) or RW1 (read, write 1 to set) as specified by AHCI. This initialization is handled by the Python program create_ahci_registers.py, which also generates the ahci_localparams.vh include file that provides symbolic names for addressing register fields in the Verilog code of the other modules and in simulation test benches. The same file runs in the camera to allow access to the hardware registers by name.
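    The effect of these bit types on a register write can be summarized with a small conceptual sketch (plain C, purely to illustrate the semantics; the real logic is in Verilog and the per-bit masks come from create_ahci_registers.py):

    #include <stdint.h>

    /* old        - current register value
     * wr         - value written by software
     * rw/rwc/rw1 - per-bit type masks; RO bits are simply absent from all masks */
    static uint32_t apply_write(uint32_t old, uint32_t wr,
                                uint32_t rw, uint32_t rwc, uint32_t rw1)
    {
        uint32_t v = old;
        v = (v & ~rw) | (wr & rw);   /* RW : the written value replaces the bit */
        v &= ~(wr & rwc);            /* RWC: writing 1 clears the bit           */
        v |= (wr & rw1);             /* RW1: writing 1 sets the bit             */
        return v;
    }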

    Each write access to the register space generates a write event that crosses the clock boundary and reaches the HBA logic; it is also used to start the AHCI FSM even if it is in the reset state.

    The second port of the register memory operates in the mclk domain and allows register reads and writes by the other AHCI submodules (FIS RECEIVE – which writes registers – FIS TRANSMIT and CONTROL STATUS).

    The same module also provides access to debug registers and allows reading of the “datascope” acquired data.

    CONTROL STATUS

    The control/status module maintains “live” registers/bits that the controller needs to react to when they are changed by the software, and it reacts to various events in different parts of the controller. The updated register values are written to the software-accessible register bank.

    This module generates interrupt request to the processor as specified in the AHCI standard. It uses one of the interrupt lines from the FPGA to the CPU (IRQF2P) available in Zynq.

    AHCI FSM

    The AHCI state machine implements the AHCI layer using a programmable sequencer. Each state traverses two stages: actions and conditions. The first stage triggers single-cycle pulses that are distributed to the appropriate modules (currently 52 in total). Some actions require just one cycle, others wait for a “done” response from the destination. The conditions phase involves freezing the logical conditions (currently 44 in total) and then going through them in the order specified in the AHCI documentation. The state description for the machine is provided in an assembler-like format inside the Python program ahci_fsm_sequence.py, which generates Verilog code for the action_decoder.v and condition_mux.v modules that are instantiated in ahci_fsm.v.
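    A conceptual model of one state step, written in C purely for illustration (the real sequencer is generated Verilog, and the table layout below is my assumption): the actions stage fires the state's single-cycle pulses, then the conditions stage scans the frozen condition flags in the prescribed order and picks the next state.

    struct fsm_state {
        const int *actions;      /* indices of action pulses to fire        */
        int n_actions;
        const int *conditions;   /* condition indices, in AHCI-spec order   */
        const int *next_state;   /* target state when the condition is true */
        int n_conditions;
        int fallthrough;         /* next state if no condition is true      */
    };

    static int fsm_step(const struct fsm_state *s, void (*do_action)(int),
                        const unsigned char *frozen_cond)
    {
        for (int i = 0; i < s->n_actions; i++)
            do_action(s->actions[i]);            /* actions stage */
        for (int i = 0; i < s->n_conditions; i++)
            if (frozen_cond[s->conditions[i]])   /* conditions stage */
                return s->next_state[i];
        return s->fallthrough;
    }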

    The output listing of the FSM generator is saved to ahci_fsm_sequence.lst. The debug output registers include the address of the last FSM transition, so this listing can be used to locate problems during hardware testing. It is possible to update the generated FSM sequence at run time using registers designated as vendor-specific in the controller I/O space.

    FIS RECEIVE

    The FIS RECEIVE module processes incoming FISes (DMA Setup FIS, PIO Setup FIS, D2H Register FIS, Set Device Bits FIS, unknown FIS), updates the required registers and saves them in the appropriate areas of the Received FIS structure. For an incoming data FIS it consumes just the header DWORD and redirects the rest to the D2H CCD FIFO of the DMA module. This module also implements the word counters (PRD byte count and the decrementing transfer counter); these counters are shared with the transmit channel.

    FIS TRANSMIT

    The FIS TRANSMIT module recognizes the following commands received from the AHCI FSM: fetch_cmd, cfis_xmit, atapi_xmit and dx_xmit, following the prefetch condition bit. The first command (fetch_cmd) requests the DMA engine to read in the command table and optionally to prefetch the PRD memory descriptors. The command data is read from the DMA module memory after one of the cfis_xmit or atapi_xmit commands, and is then transmitted to the link layer to be sent to the device. When processing dx_xmit this module sends just the header DWORD and transfers control to the DMA engine, continuing to count the PRD byte count and decrementing the transfer counter.

    FPGA resources used

    According to the “report_utilization” Xilinx Vivado command, current design uses:

    • 1358 (6.91%) slices
    • 9.5 (3.58%) Block RAM tiles
    • 7 (21.88%) BUFGCTRL
    • 2 (40%) PLLE2_ADV

    The resource usage will be reduced, as there are debug features not yet disabled. One of the PLLE2_ADV uses a clock already available in the rest of the x393 code (150MHz for MAXIGP1 and SAXIHP3), and the other PLL, which produces the 75MHz transmit-synchronous clock, can probably be eliminated too. Two of the block RAM tiles are used for capturing incoming primitives and profiling data; this functionality is not needed in the production version. More resources may be saved if we are able to use the hard-wired 10b/8b decoder, 8b/10b encoder, comma alignment and elastic buffer primitives of the Xilinx GTXE2_CHANNEL.

    Update: we eliminated use of the PLLE2_ADV in the SATA controller (the one left is only used to generate the AXI clock and is not needed with a proper setting of the PS output clock) and reduced the number of slices (datascope functionality preserved) to 1304 (6.64%). PLLs are a valuable resource for a multi-sensor camera, as we keep the possibility of using different sensors/clocks on each sensor port.

    Testing the hardware

    Testing with Python programs

    All the initial work with the actual hardware was done with a Python script that started as a reimplementation of the same functionality used when simulating the project. Most of it is in x393sata.py, which imports x393_vsc3304.py to control the VSC3304 crosspoint switch. This option turned out to be very useful for troubleshooting, starting from the initial testing of the SSD connection (the switch can route the SSD to the desktop computer), then for verifying the OOB exchange (the only thing visible on my oscilloscope) – the switch was set to connect the SSD to the Zynq and to use the eSATA connector pins to duplicate the signals between the devices, so probing did not change the electrical characteristics of the active lines. The Python program allowed us to detect communication errors, modify GTX attributes over DRP, and capture incoming data to reproduce similar conditions with the simulator. Step by step it was possible to receive the signature FIS and then run the identify command. In these tests I used a large area of the system memory that was reserved as a video ring buffer and set up as “coherent” DMA memory. We were not able to make it really “coherent” – the command data transmitted to the device (the controller reads it from the system memory as a master) often contained just zeros, as the real data written by the CPU got stuck either in one of the caches or in the DDR memory controller write buffer. These errors only went away when we abandoned the use of the coherent memory allocation and switched to streaming DMA with explicit synchronization via dma_sync_*_for_cpu/dma_sync_*_for_device.

    AHCI driver for GNU/Linux

    Mikhail Karpenko is preparing a post about the software driver, and as expected this development stage revealed new controller errors that were not detected by just manually launching commands through the Python program. When we mounted the SSD and started to copy gigabyte files, the controller reported spurious CRC errors. And it happened with one SSD, but not with the other. Using the data capturing modules it was not difficult to catch the conditions that caused the errors and then reproduce them with the simulator – one of the last bugs detected was that the link layer incorrectly handled single incoming HOLD primitives (a rather unlikely condition).

    Performance results

    The first performance testing turned out to be rather discouraging – ‘dd’ reported a rate under 100 MB/s. At that point I added profiling code to the controller, and the data rate for the raw transfers (I tried a command that involved reading 24 8KB FISes), measured from the sending of the command FIS to the receiving of the D2H Register FIS confirming the transfer, was 198MB/s – about 80% of the maximum for SATA2. Profiling the higher levels of the software, we noticed that there is virtually no overlap between the hardware and software operation. It is definitely possible to improve the result, but the fact that the software halved the effective rate suggests that even if the requests and their processing were done in parallel, it would consume 100% of the CPU power. Yes, there are two cores and the clock frequency can be increased (the current boards use a speed grade 2 Zynq, while the software still assumes speed grade 1 for compatibility with the first prototype), but it would still be a big waste in the camera. So we will likely bypass the file system for sequential recording of video/images and use the second partition of the SSD for raw recording, especially as we will record directly from the video buffer of the system memory, so there is no dealing with scatter-gather descriptors and no need to synchronize system memory, as no cache is involved. The memory controller is documented as being self-coherent, so reading the same memory while it is being written to through a different channel should cause the write operation to be performed first.

    Conclusions and future plans

    We’ve achieved useful functionality of the camera SATA controller, allowing recording to the internal high capacity m.2 SSD, so all the hardware is tested and cameras can be shipped to users. Future upgrades (including SATA3) will be released in the same way as other camera software. On the software side we will first need to upgrade our camogm recorder to reduce CPU usage during recording and provide 100% load to the SATA controller (rather easy when recording a continuous memory buffer). Later (it will be more important after the SATA3 implementation) we may optimize the controller even more and try to short-cut the video compressor outputs directly to the SATA controller, using the system memory as a buffer only when the SSD is not ready to receive data (they do take “timeouts”).

    We hope that this project will be useful for other developers who are interested in Free Software solutions and prefer the Real Verilog Code (RVC) to all those “wizards”, “black boxes” and “IP”.

    Software tools used (and not)

    Elphel designs and builds high performance cameras, striving to provide our users/developers with design freedom at every possible level. We do not use any binary-only modules or other hidden information in our designs – everything we know ourselves is posted online, usually on GitHub and Elphel Wiki. When developing for the FPGA, which unfortunately still depends on proprietary tools, we limit ourselves to tools that are free to download, so that we are in exactly the same position as many of our users. We can not make it necessary for the users (and consider it immoral) to purchase expensive tools to be able to modify the free software code for the hardware they purchased from Elphel, so no “Chipscopes” or other fancy proprietary tools were used in this project’s development.

    Keeping information free is a precondition, but it alone is not sufficient for many users to be able to effectively develop new functionality for the products – it also needs to be easy to do. In the area of FPGA design (a very powerful tool resulting in performance that is not possible with just software applications) we think of our users as smart people, but not necessarily professional FPGA developers. Like ourselves.

    Fig.3 FPGA development with VDT

    We learned a lesson from our previous FPGA projects that depended too much on particular releases of the Xilinx tools and were difficult to maintain even for ourselves. Our current code is easier to use, port and support; we tried to minimize dependence on particular tools and used what we think is a better development environment. I believe that the “Lego blocks” style is not the most productive way to develop FPGA projects, and it is definitely not the only one possible.

    Treating HDL code similarly to software code is no less powerful a paradigm, and in my opinion the development tools should not pretend to be “wizards” who know better than I do what I am allowed (or not allowed) to do, but should be more like gentle secretaries or helpers who can take over much of the routine work, remind me about important events and provide appropriate suggestions (when asked). Such behavior is even more important if the particular activity is not the only one you do and you may come back to it after a long break. A good IDE should be like that – help you navigate the code, catch problems early, be useful with default settings but provide capabilities to fine-tune the functionality according to personal preferences. It is also important to provide a familiar environment; this is why we use the same Eclipse IDE for Verilog, Python, C/C++, Java and more. All our projects come with initial project settings files that can be imported into this IDE (supplemented by the appropriate plugins) so you can immediately start development from the point where we currently left it.

    For FPGA development Elphel provides VDT – a powerful tool that includes deep Verilog support and integrates the free software simulator Icarus Verilog, the GitHub repository and the popular GTKWave for visualizing simulation results. It comes with preconfigured support for FPGA vendors’ proprietary synthesis and implementation tools and allows the addition of other tools without requiring modification of the plugin code. The SATA project uses the Xilinx Vivado command line tools (not the Vivado GUI); support for several other FPGA tools is also available.

    by Andrey Filippov at March 12, 2016 11:14 PM

    ZeptoBARS

    GD32F103CBT6 - Cortex-M3 with serial flash : weekend die-shot

    Giga Devices GD32F103CBT6 really surprised us:



    Giga Devices was a serial flash manufacturer for quite some time. When they launched their ARM Cortex-M3 lineup (with some level of binary compatibility with the STM32), instead of going the conventional route of making numerous dies with different flash and SRAM sizes, they went for an SRAM & logic die and a separate serial flash die. How could this work fast enough? Keep reading :-) At least the ESP8266 has already taught us that executing code from serial flash and reaching acceptable speed is not impossible.

    Use of serial flash allows Giga Devices to increase maximum flash size in their microcontrollers quite a bit (currently they have up to 3MiB) and to save quite a bit on ARM licensing fees (if they are paying "per die design").


    The die has 110 pads, 9 of which are used by the flash die. The GD32F103CBT6 is in a TQFP48 package – which again suggests that this die is universal and also used in higher pin count models. Die size is 2889x3039 µm.

    Logo:


    ADC capacitor bank:


    After etching to poly level we clearly see that there is no flash on the die:


    SRAM size is 32KiB in each of the largest blocks (128 KiB total) – this stores code, which means the first 128KiB can be accessed faster than typical flash. GD32 chips with 20KiB of SRAM or less have no more than 128KiB of flash, so all flash content is served from SRAM. This might also mean that startup time is slower than one would expect. With this SRAM mirroring it is not surprising that the GD32 beats the STM32 in performance even at the same frequency while losing in idle & sleep power consumption. Consumption at full load is lower than the STM32 due to a better (smaller) manufacturing technology.

    2 smaller blocks are 10KiB each and are likely to be user-accessible SRAM.
    4 smallest blocks closest to the synthesized logic are 512B each.

    SRAM has cell size 2.04 µm², which is ~110nm. Scale 1px = 57nm:


    Standard cells:


    Low power standard cells:


    Flash die:

    Flash die size: 1565x1378 µm.

    PS. Thanks for the chips go to dongs from irc.

    March 12, 2016 12:00 PM

    March 08, 2016

    Harald Welte

    Linaro Connect BKK16 Keynote on GPL Compliance

    Today I had the pleasure of co-presenting with Shane Coughlan the Linaro Connect BKK16 Keynote on GPL compliance.

    The main topics addressed by this presentation are:

    • Brief history about GPL enforcement and how it has impacted the industry
    • Ultimate Goal of GPL enforcement is compliance
    • The license is not an end in itself, but rather to facilitate collaborative development
    • GPL compliance should be more engineering and business driven, not so much legal (compliance) driven.

    The video recording is available at https://www.youtube.com/watch?v=b4Bli8h0V-Q

    Slides are available at http://git.gnumonks.org/index.html/laforge-slides/plain/2016/linaroconnect/compliance.html

    The video of a corresponding interview is available from https://www.youtube.com/watch?v=I6IgjCyO-iQ

    by Harald Welte at March 08, 2016 11:00 PM

    March 03, 2016

    Free Electrons

    Free Electrons at the Embedded Linux Conference 2016

    Like every year for about 10 years, the entire Free Electrons engineering team will participate in the next Embedded Linux Conference, taking place on April 4-6 in San Diego, California. For us, participating in such conferences is very important, as it allows us to remain up to date with the latest developments in the embedded Linux world, create contacts with other members of the embedded Linux community, and meet the community members we already know and work with on a daily basis via the mailing lists or IRC.

    Embedded Linux Conference 2016

    Over the years, our engineering team has grown, and with the arrival of two more engineers on March 14, it now numbers 9 people, all of whom will participate in the Embedded Linux Conference.

    As usual, in addition to attending, we also proposed a number of talks, and some of them have been accepted and are visible in the conference schedule:

    As usual, our talks are centered around our areas of expertise: hardware support in the Linux kernel, especially for ARM platforms, and build system related topics (Buildroot, Yocto, autotools).

    We are looking forward to attending this event, and see many other talks from various speakers: the proposed schedule contains a wide range of topics, many of which look really interesting!

    by Thomas Petazzoni at March 03, 2016 01:49 PM

    February 24, 2016

    Harald Welte

    Report from the VMware GPL court hearing

    Today, I took some time off to attend the court hearing in the GPL violation/infringement case that Christoph Hellwig has brought against VMware.

    I am not in any way legally involved in the lawsuit. However, as a fellow (former) Linux kernel developer myself, and a long-term Free Software community member who strongly believes in the copyleft model, I of course am very interested in this case - and of course in an outcome in favor of the plaintiff. Nevertheless, the below report tries to provide an un-biased account of what happened at the hearing today, and does not contain my own opinions on the matter. I can always write another blog post about that :)

    I blogged about this case before briefly, and there is a lot of information publicly discussed about the case, including the information published by the Software Freedom Conservancy (see the link above, the announcement and the associated FAQ).

    Still, let's quickly summarize the facts:

    • VMware is using parts of the Linux kernel in their proprietary ESXi product, including the entire SCSI mid-layer, USB support, radix tree and many, many device drivers.
    • as is generally known, Linux is licensed under GNU GPLv2, a copyleft-style license.
    • VMware has modified all the code they took from the Linux kernel and integrated it into something they call vmklinux.
    • VMware has modified their proprietary virtualization OS kernel vmkernel with specific APIs/symbols to interact with vmklinux
    • at least in earlier versions of ESXi, virtually any block device access has to go through vmklinux and thus the portions of Linux they took
    • vmklinux and vmkernel are dynamically linked object files that are linked together at run-time
    • the Linux code they took runs in the same execution context (address space, stack, control flow) as the vmkernel.

    Ok, now enter the court hearing of today.

    Christoph Hellwig was represented by his two German Lawyers, Dr. Till Jaeger and Dr. Miriam Ballhausen. VMware was represented by three German lawyers lead by Matthias Koch, as well as a US attorney, Michael Jacobs (by means of two simultaneous interpreters). There were also several members of the in-house US legal team of VMware present, but not formally representing the defendant in court.

    As is unusual for copyright disputes, there was quite some audience following the court. Next to the VMware entourage, there were also a couple of fellow Linux kernel developers as well as some German IT press representatives following the hearing.

    General Introduction of the presiding judge

    After some formalities (like the question whether or not a ',' is missing after the "Inc." in the way it is phrased in the lawsuit), the presiding judge started with some general remarks

    • the court is well aware of the public (and even international public) interest in this case
    • the court understands there are novel fundamental legal questions raised that no court - at least no German court - had so far to decide upon.
    • the court also is well aware that the judges on the panel are not technical experts and thus not well-versed in software development or computer science. Rather, they are a court specialized on all sorts of copyright matters, not particularly related to software.
    • the court further understands that Linux is a collaborative, community-developed operating system, and that the development process is incremental and involves many authors.
    • the court understands there is a lot of discussion about interfaces between different programs or parts of a program, and that there are a variety of different definitions and many interpretations of what interfaces are

    Presentation about the courts understanding of the subject matter

    The presiding judge continued by explaining the court's understanding of the subject matter. They understood that VMware ESXi serves to virtualize computer hardware in order to run multiple copies of the same or of different operating systems on it. They also understood that vmkernel is at the core of that virtualization system, and that it contains something called vmkapi, which is an interface towards Linux device drivers.

    However, they had misunderstood the case as somehow being about an interface to a Linux guest OS virtualized on top of vmkernel. It took both defendant and plaintiff some time to illustrate that this is in fact not the subject of the lawsuit, and that you can still have portions of Linux linked into vmkernel while exclusively virtualizing Windows guests on top of vmkernel.

    The court went on to share their understanding of the GPLv2 and its underlying copyleft principle: that it is not about abandoning the authors' rights but, to the contrary, about exercising copyright. They understood that the license has implications for derivative works and demonstrated that they had been working with both the German translation as well as the English-language original text of the GPLv2. At least I was sort of impressed by the way they grasped it - much better than some of the other courts that I had to deal with in the various cases I brought forward during my earlier gpl-violations.org work.

    They also illustrated that they understood that Christoph Hellwig has been developing parts of the Linux kernel, and that modified parts of Linux were now being used in some form in VMware ESXi.

    After this general introduction, there was the question of whether or not both parties would still want to settle before going further. The court already expected that this would be very unlikely, as it understood that the dispute serves to resolve fundamental legal question, and there is hardly any compromise in the middle between using or not using the Linux code, or between licensing vmkernel under a GPL compatible license or not. And as expected, there was no indication from either side that they could see an out-of-court settlement of the dispute at this point.

    Right to sue / sufficient copyrighted works of the plaintiff

    There was quite some debate about the question whether or not the plaintiff has shown that he actually holds a sufficient amount of copyrighted materials.

    The question here is not, whether Christoph has sufficient copyrightable contributions on Linux as a whole, but for the matter of this legal case it is relevant which of his copyrighted works end up in the disputed product VMware ESXi.

    Due to the nature of the development process, where lots of developers make intermittent and incremental changes, it is not as straightforward to demonstrate this as one would hope. You cannot simply print an entire C file from the source code and mark large portions as being written by Christoph himself. Rather, lines have been edited again and again, shifted, re-structured, re-factored. For non-developers like the judges, it is therefore not easy to decide this question.

    This situation is used by the VMware defense in claiming that overall, they could only find very few functions that could be attributed to Christoph, and that this may altogether be only 1% of the Linux code they use in VMware ESXi.

    The court recognized this as difficult, as German copyright law has the concept of fading: if the original work by one author has been edited to an extent that it is barely recognizable, the original work has faded and so have the author's rights. The court did not state whether it believed that this has happened here. To the contrary, they indicated that it may very well be that only very few lines of code can make a significant impact on the work as a whole. However, it is problematic for them to decide, as they don't understand source code and software development.

    So if (after further briefs from both sides and deliberation of the court) this is still an open question, it might very well be the case that the court would request a technical expert report to clarify it.

    Are vmklinux + vmkernel one program/work or multiple programs/works?

    Finally, there was some deliberation about the very key question of whether or not vmkernel and vmklinux were separate programs / works or one program / work in the sense of copyright law. Unfortunately only the very surface of this topic could be touched in the hearing, and the actual technical and legal arguments of both sides could not be heard.

    The court clarified that if vmkernel and vmklinux would be considered as one program, then indeed their use outside of the terms of the GPL would be an intrusion into the rights of the plaintiff.

    The difficulty is how to actually venture into the legal implications of certain technical software architecture, when the people involved have no technical knowledge on operating system theory, system-level software development and compilers/linkers/toolchains.

    A lot is thus left to how well and how 'believably' the parties can present their case. It was very clear from the VMware side that they wanted to downplay the role and proportion of vmkernel and its Linux heritage. At times their lawyers made statements like "Linux is this small yellow box in the left bottom corner (of our diagram)". So of course even the diagrams are drawn in a way that twists the facts according to their view on reality.

    Summary

    • The court seems very much interested in the case and wants to understand the details
    • The court recognizes the general importance of the case and the public interest in it
    • There were some fundamental misunderstandings on the technical architecture of the software under dispute that could be clarified
    • There are actually not that many facts that are disputed between both sides, except the (key, and difficult) questions on
      • does Christoph hold sufficient rights on the code to bring forward the legal case?
      • are vmkernel and vmklinux one work or two separate works?

    The remainder of this dispute will thus be centered on the latter two questions - whether in this court or in any higher courts that may have to re-visit this subject after either of the parties takes this further, if the outcome is not in their favor.

    In terms of next steps,

    • both parties have until April 15, 2016 to file further briefs to follow-up the discussions in the hearing today
    • the court scheduled May 19, 2016 as date of promulgation. However, this would of course only hold true if the court would reach a clear decision based on the briefs by then. If there is a need for an expert, or any witnesses need to be called, then it is likely there will be further hearings and no verdict will be reached by then.

    by Harald Welte at February 24, 2016 11:00 PM

    February 23, 2016

    Harald Welte

    Software under OSA Public License is neither Open Source nor Free Software

    It seems my recent concerns on the OpenAirInterface re-licensing were not unjustified.

    I contacted various legal experts in the Free Software legal community about this, and the response was unanimous: in all feedback I received, the general opinion was that software under the OSA Public License V1.0 is neither Free Software nor Open Source Software.

    The rationale is that it does not fulfill the criteria of

    • the FSF Free Software definition, as the license does not fulfill freedom 0: The freedom to run the program as you wish, for any purpose (which obviously includes commercial use)
    • the Open Source Initiative's Open Source Definition, as the license must not discriminate against fields of endeavor, such as commercial use.
    • the Debian Free Software Guidelines, as the DFSG also require no discrimination against fields of endeavor, such as commercial use.

    I think we as the community need to be very clear about this. We should not easily tolerate that people put software under restrictive licenses but still call that software open source. This creates a bad impression to those not familiar with the culture and spirit of both Free Software and Open Source. It creates the impression that people can call something Open Source but then still ask royalties for it, if used commercially.

    It is a shame that entities like Eurecom and the OpenAirInterface Software Association are open-washing their software by calling it Open Source when in fact it isn't. This attitude frankly makes me sick.

    That's just like green-washing when companies like BP are claiming they're now an environmental friendly company just because they put some solar panels on the roof of some building.

    by Harald Welte at February 23, 2016 11:00 PM

    Bunnie Studios

    The Story Behind the Cover for The Essential Guide to Electronics in Shenzhen

    First, I want to say wow! I did not expect such a response to this book. When preparing for the crowdfunding campaign, I modeled several scenarios, and none of them predicted an outcome like this.

    The Internet has provided fairly positive feedback on the cover of the book. I’m genuinely flattered that people like how it turned out. There’s actually an interesting story behind the origins of the book cover, which is the topic of this post.

    It starts with part of a blog post series I did a while back, “The Factory Floor, Part 3 of 4: Industrial Design for Startups”. In that post, I outline a methodology for factory-aware design, and I applied these methods when designing my book cover. In particular, step 3 & 4 read:

    3. Visit the facility, and take note of what is actually running down the production lines. … Practice makes perfect, and from the operators to the engineers they will do a better job of executing things they are doing on a daily basis than reaching deep and exercising an arcane capability.

    4. Re-evaluate the design based on a new understanding of what’s possible, and iterate.

    My original cover design was going to be fairly conventional – your typical cardboard laminated in four color printing, or perhaps even a soft cover, and the illustration was to be done by the same fellow who did the cute bunny pictures that preface each chapter, Miran Lipovača.

    But, as a matter of practicing what I preach, I made a visit to the printing factory to see what was running down its lines. They had all manners of processes going on in the factory, from spine stitching to die cutting and lamination.


    Chibitronics’ Circuit Sticker Sketchbook is also printed at this factory

    One process in particular caught my eye – in the back, there was a room full of men using belt sanders with varying grits of sand paper to work the edges of books until they were silky smooth. Next to that was a hot foil transfer machine – through heat and pressure, it can apply a gold (or any other color) foil to the surface of paper. In this case, they were gilding the edges of books, in a style similar to that found on fancy bibles and prayer books. They could also use the same process to do a foil deboss on cardboard.


    Beltsanding the edges of a stack of books until they are silky smooth


    Closeup of the hot foil transfer mechanism


    Stacks of books with gleaming, gilded edges

    This is when I got the idea for the cover. These gilded books looked beautiful – and because the process is done in-house, I knew I could get it for a really good price. So, I went back to the drawing board and thought about what would look good using this process. The first idea was to take the bunny picture, and adapt it for the gold foil process. Unfortunately, the bunny illustrations relied heavily upon halftone grays, something which wouldn’t translate well into a gold foil process. Someone else suggested that perhaps I should do a map of China, with Shenzhen marked and some pictures of components around it. I didn’t like it for a number of reasons, the first one being the headache of securing the copyright to a decent map of China that was both geographically accurate and politically correct.

    So I did a Google image search for “gold leaf covers” just to see what’s out there. The typical motif I observed was some kind of filigree, typically with at least left/right symmetry, if not also up/down symmetry.

    I thought maybe I’d go and fire up Adobe Illustrator and start sketching some filigree patterns, but quickly gave up on that idea – it was a lot of work, and I’m not entirely comfortable with that tool. Then it hit me that individual PCB layers have the same sort of intricacy as a filigree – and I live and breathe PCB design.

    So, I started up my favorite PCB design package, Altium. I tried playing around a bit with the polygon fill function, using its hashing feature and adjusting the design rules to see if I couldn’t make a decent filigree with it. The effect seemed reasonable, especially when I used a fairly coarse fill and an additional design rule that caused polygon fills to keep a wide berth around any traces.

    Then I had to come up with some circuitry to fill the cover. I looked at a few of my circuit boards, and in reality, few practical circuits had the extreme level of symmetry I was looking for. So I went ahead and cooked up a fake circuit on the fly. I made a QFN footprint based on fictional design rules that would look good, and sorted through my library of connector footprints for ones that had large enough pads to print reasonably well using the foil transfer process. I found a 2.4GHz antenna and some large-ish connectors.

    I then decided upon a theme – generally, I wanted the book to go from RF on the bottom to digital on the top. So I started by drawing the outline of an A5 page, and putting a couple lines of symmetry down. In the lower left, I placed the 2.4 GHz antenna, and then coupled it to a QFN in a semi-realistic fashion, throwing a couple of capacitors in for effect. I added an SMA connector that spanned the central symmetry line, and then an HRS DF-11 connector footprint above it. I decided in the RF section I’d make extensive use of arcs in the routing, calling upon a motif quite common in RF design and visually distinct from digital routing. Next I added a SATA connector off to the middle edge, and routed a set of differential pairs to the TX/RX pads, to which I applied the trace length equalization feature of the PCB tool to make them wavy – just for added aesthetic effect.

    Then I started from the top left and designed the digital section. Nothing says “old school digital” to me louder than a DB-9 connector (and yes, you pedants, it’s technically a DE-9, but in my heart it will always be a DB-9), so I plopped one of those down up top. I decided I’d spice things up a bit by throwing series termination resistors between the connector and a fake QFN IC; yes, in reality, not all pins would have these, but I thought it looked more aesthetic to put it on all the pins. Then, I routed signals from the QFN as a bus, this time using 45 degree angles, to a 14-pin JTAG connector which I placed in the heart of the book. Everything starts and ends with the JTAG connector these days, so why not?

    The design now occupied just the left half of the board. I copied it, flipped it, and pasted it to create a perfect 2-fold symmetry around the vertical axis.

    Around all of this, I put a border with fiducials and gutters, the same as you would find in a PCB destined for production in an automated SMT line. You’ll notice I break symmetry by making the top right fiducial a square, not a circle; this is a hallmark feature of fiducials, since their purpose is to both align the vision recognition systems and determine if the PCB has been loaded into the machine correctly.

    Finally, I added the book title and author using Altium’s TrueType string facility, and ran an automated fill of the empty space to create the filigree.

    I actually designed the whole cover while I was on the long flight from Hong Kong to Amsterdam for 32C3. I find that airplane flights are excellent for doing PCB routing and design work like this, free of any distractions from the Internet. As a bonus, every now and then someone comes along and feeds you and tops up your glass of wine, allowing your creative streak to be unbroken by concerns about hunger or sobriety.

    When viewed in black and white, the book cover honestly looks a little “meh” – when I first saw it, I thought, “well, at least maybe the geeks will appreciate it”. But after seeing the faux-linen with gold foil transfer sample, I knew this was the design I would run with for production.

    The next difficult challenge was to not paint legs on the metaphorical snake. As an engineer, I disliked how over-simplified the design was. There really should be bypass capacitors around the digital components. And SATA requires series DC blocking caps. But I had to let all that go, set it aside, and stop looking at it as a design, and let it live its own life as the cover of a book.

    And so there you have it – the story behind perhaps the only book cover designed using Altium (if you have a gerber viewer, you can check out the gerber files). The design went from a .PcbDoc file, to a .DXF, to .AI, and finally placed in a .INDD – not your typical progression of file formats, but in the end, it was fun and worthwhile figuring it all out.

    Thanks again to everyone who helped promote and fund my book. I’m really excited to get started on the print run. The problem I’m facing now is I don’t know how many to print. Originally, I was fairly certain no matter what, I would just barely hit the minimum order quantity (MOQ) of 1,000 books. Now that the campaign has blown past that, I have to wait until the campaign finishes in 23 days before I know what to put on the purchase order to the manufacturer. And, shameless plug – if you’re interested in the book, it’s $5 cheaper if you back during the campaign, so consider getting your order in before the prices go up.

    by bunnie at February 23, 2016 06:07 PM

    February 22, 2016

    Bunnie Studios

    Name that Ware, February 2016

    The Ware for February 2016 is shown below.

    I couldn’t bring myself to blemish this beautiful ware by pixelating all of the part numbers necessary to make this month’s game a real challenge. Instead, I just relied upon a strategic cropping to remove the make and model number from the lower left corner of the board.

    Remember the TMS4464? Yah, back when TI’s thing was making DRAM, not voltage regulators, and when Foxconn made connectors, not iPhones. Somewhere along the way, some business guy coined the term “pivot” to describe such changes in business models.

    Thanks to Michael Steil for sharing this beautiful piece of history with me at 32C3!

    by bunnie at February 22, 2016 08:02 AM

    Winner, Name that Ware January 2016

    The Ware for January 2016 was a TPI model 342 water resistant, dual-input type K&J thermocouple thermometer. Picking a winner was tough. Eric Hill was extremely close on guessing the model number — probably the only difference between the TPI 343 and the 342 is a firmware change and perhaps the button that lets you pick between K/J type thermocouples, neither of which would be obvious from the image shown.

    However, I do have to give kudos to CzajNick for pointing out that the MCU in this is a 4-bit microcontroller. Holy shit, I didn’t know they made those anymore, much less that they’d be useful for anything beyond a calculator. This is probably the only functional 4-bit machine that I have in my lab. All of a sudden this thermometer got a little bit cooler in my mind. He also correctly identified the ware as some type of double-input thermocouple thermometer in the course of his analysis.

    Even though he didn’t cite a specific make/model, I really appreciated the analysis, especially the factoid about this having a 4-bit microcontroller, so I’ll declare CzajNick the winner. Congrats and email me for your prize!

    Also, I have to say, after tearing apart numerous pieces of shoddy Chinese test equipment to fix stupid problems in them, it was a real sight for sore eyes to see such a clean design with high quality, brand-name components. I guess this is 90’s-vintage Korean engineering for you — a foreshadowing of the smartphone onslaught to come out of the same region a decade later.

    by bunnie at February 22, 2016 08:02 AM

    February 20, 2016

    Harald Welte

    Osmocom.org migrating to redmine

    In 2008, we started bs11-abis, which was shortly thereafter renamed to OpenBSC. At the time it seemed like a good idea to use trac as the project management system, to have a wiki and an issue tracker.

    When further Osmocom projects like OsmocomBB, OsmocomTETRA etc. came around, we simply replicated that infrastructure: Another trac instance with the same theme, and a shared password file.

    The problem with this (and possibly the way we used it) is:

    • it doesn't scale, as creating projects is manual, requires a sysadmin and is time-consuming. This meant e.g. SIMtrace was just a wiki page in the OsmocomBB trac installation + associated http redirect, causing some confusion.
    • issues can not easily be moved from one project to another, or have cross-project relationships (like, depend on an issue in another project)
    • we had to use an external planet in order to aggregate the blog of each of the trac instances
    • user account management the way we did it required shell access to the machine, meaning user account applications got dropped due to the effort involved. My apologies for that.

    Especially the inability to move pages and tickets between trac instances has resulted in suboptimal use of the tools. If we first write code as part of OpenBSC and then move it to libosmocore, the associated issues + wiki pages should be moved to a new project.

    At the same time, for the last 5 years we've been successfully using redmine inside sysmocom to keep track of many dozens of internal projects.

    So now, finally, we (zecke, tnt, myself) have taken up the task to migrate the osmocom.org projects into redmine. You can see the current status at http://projects.osmocom.org/. We could create a more comprehensive project hierarchy, and give libosmocore, SIMtrace, OsmoSGSN and many others their own project.

    Thanks to zecke for taking care of the installation/sysadmin part and the initial conversion!

    Unfortunately the conversion from trac to redmine wiki syntax (and structure) was not as automatic and straightforward as one would have hoped. But after spending one entire day going through the most important wiki pages, things are looking much better now. As a side effect, I have had a more comprehensive look into the history of all of our projects than ever before :)

    Still, a lot of clean-up and improvement is needed until I'm happy; in particular, splitting the OpenBSC wiki into separate OsmoBSC, OsmoNITB, OsmoBTS, OsmoPCU and OsmoSGSN wikis is probably still going to take some time.

    If you would like to help out, feel free to register an account on projects.osmocom.org (if you don't already have one from the old trac projects) and mail me for write access to the project(s) of your choice.

    Possible tasks include

    • putting pages into a more hierarchic structure (there's a parent/child relationship in redmine wikis)
    • fixing broken links due to page renames / wiki page moves
    • creating a new redmine 'Project' for your favorite tool that has a git repo on http://git.osmocom.org/ and writing some (at least initial) documentation about it.

    You don't need to be a software developer for that!

    by Harald Welte at February 20, 2016 11:00 PM

    February 19, 2016

    Harald Welte

    Some update on recent OsmoBTS changes

    After a long stretch of gradual bug fixing and improvement, some significant changes have been made in OsmoBTS over the last months.

    Just a quick reminder: In Fall 2015 we finally merged the long-pending L1SAP changes originally developed by Jolly, introducing a new intermediate common interface between the generic part of OsmoBTS, and the hardware/PHY specific part. This enabled a clean structure between osmo-bts-sysmo (what we use on the sysmoBTS) and osmo-bts-trx (what people with general-purpose SDR hardware use).

    The L1SAP changes had some fall-out that needed to be fixed, not a big surprise with any change that big.

    More recently however, three larger changes were introduced:

    proper Multi-TRX support

    Based on the new phy_link/phy_instance infrastructure, one can map each phy_instance to one TRX by means of the VTY / configuration file.

    The core of OsmoBTS now supports any number of TRXs, leading to flexible Multi-TRX support.
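
    To illustrate, a configuration fragment for a two-TRX setup might look roughly like the sketch below. This is only an illustration based on the phy_link/phy_instance concept; the exact VTY options required depend on the PHY backend and the OsmoBTS version in use:

    phy 0
     instance 0
     instance 1
    bts 0
     oml remote-ip 127.0.0.1
     trx 0
      phy 0 instance 0
     trx 1
      phy 0 instance 1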

    OCTPHY support

    A Canadian company called Octasic has been developing a custom GSM PHY for their custom multi-core DSP architecture (OCTDSP). Rather than re-inventing the wheel for everything on top of the PHY, they chose to integrate OsmoBTS on top of it. I've been working at sysmocom on integrating their initial code into OsmoBTS, rendering a new osmo-bts-octphy backend.

    This back-end has also recently been ported to the phy_link/phy_instance API and is Multi-TRX ready. You can both run multiple TRXs in one DSP and have multiple DSPs in one BTS, paving the road for scalability.

    osmo-bts-octphy is now part of OsmoBTS master.

    Corresponding changes to OsmoPCU (for full GPRS support on OCTPHY) are currently being worked on by Max at sysmocom.

    Litecell 1.5 PHY support

    Another Canadian company (Nutaq/Nuran) has been building a new BTS called Litecell 1.5. They also implemented OsmoBTS support, based on the osmo-bts-sysmo code. We've been able to integrate that code with the above-mentioned phy_link/phy_interface in order to support the MultiTRX capability of this hardware.

    Litecell 1.5 MultiTRX capability has also been integrated with OsmoPCU.

    osmo-bts-litecell15 is now part of OsmoBTS master.

    Summary

    • 2016 starts as the OsmoBTS year of MultiTRX.
    • 2016 also starts as a year of many more hardware choices for OsmoBTS
    • we see more commercial adoption of OsmoBTS outside of the traditional options of sysmocom and Fairwaves

    by Harald Welte at February 19, 2016 11:00 PM

    February 18, 2016

    Free Electrons

    Free Electrons speaking at the Linux Collaboration Summit

    Free Electrons engineers are regular speakers at the Embedded Linux Conference and Embedded Linux Conference Europe events from the Linux Foundation, in which our entire engineering team participates each year.

    In 2016, for the first time, we will also be speaking at the Collaboration Summit, an invitation-only event where, as the Linux Foundation presents it, “the world’s thought leaders in open source software and collaborative development convene to share best practices and learn how to manage the largest shared technology investments of our time”.

    Collaboration Summit 2016

    This event will take place on March 29-31 in Lake Tahoe, California, and the event schedule has been published recently. Free Electrons CTO Thomas Petazzoni will be giving a talk, Upstreaming hardware support in the Linux kernel: why and how?, during which we will share our experience working with HW manufacturers to bring the support for their hardware to the upstream Linux kernel, discuss the benefits of upstreaming, and present best practices for working with upstream.

    With a small team of engineers, Free Electrons has merged thousands of patches into the official Linux kernel over the last few years, and several of its engineers hold maintainer positions in the Linux kernel community. We are happy to take the opportunity of the Collaboration Summit to share some of our experience, and hopefully encourage and help other companies to participate upstream.

    by Thomas Petazzoni at February 18, 2016 04:16 PM

    February 15, 2016

    Free Electrons

    Initial support for ARM64 Marvell Armada 7K/8K platform

    Two weeks ago, we submitted the initial support for the Marvell Armada 3700, which was the first ARM64 platform that Free Electrons engineers contributed to the upstream Linux kernel.

    Today, we submitted initial support for another Marvell ARM64 platform, the Armada 7K and Armada 8K platform. Compared to the Armada 3700, the Armada 7K and 8K are much more on the high-end side: they use a dual Cortex-A72 or a quad Cortex-A72, as opposed to the Cortex-A53 for the Armada 3700.

    Marvell Armada 7K / Marvell Armada 8K

    The Armada 7K and 8K also use a fairly unique architecture: internally, they are composed of several components:

    • One AP (Application Processor), which contains the processor itself and a few core hardware blocks. The AP used in the Armada 7K and 8K is called AP806, and is available in two configurations: dual Cortex-A72 and quad Cortex-A72.
    • One or two CP (Communication Processor), which contain most of the I/O interfaces (SATA, PCIe, Ethernet, etc.). The 7K family chips have one CP, while the 8K family chips integrate two CPs, providing twice the number of I/O interfaces available in a single CP. The CP used in the 7K and 8K is called CP110.

    All in all, this gives the following combinations:

    • Armada 7020, which is a dual Cortex-A72 with one CP
    • Armada 7040, which is a quad Cortex-A72 with one CP
    • Armada 8020, which is a dual Cortex-A72 with two CPs
    • Armada 8040, which is a quad Cortex-A72 with two CPs

    So far, we submitted initial support only for the AP806 part of the chip, with the following patch series:

    We will continue to submit more and more patches to support other features of the Armada 7K and 8K processors in the near future.

    by Thomas Petazzoni at February 15, 2016 11:02 AM

    Factory flashing with U-Boot and fastboot on Freescale i.MX6

    Introduction

    For one of our customers building a fairly low-volume product based on the i.MX6, we had to design a mechanism to perform the factory flashing of each product. The goal is to be able to take a freshly produced device from the state of a brick to a state where it has a working embedded Linux system flashed on it. This specific product is using an eMMC as its main storage, and our solution only needs a USB connection with the platform, which makes it a lot simpler than solutions based on network (TFTP, NFS, etc.).

    In order to achieve this goal, we have combined the imx-usb-loader tool with the fastboot support in U-Boot and some scripting. Thanks to this combination of tools, running a single script is sufficient to perform the factory flashing, or even restore an already flashed device back to a known state.

    The overall flow of our solution, executed by a shell script, is:

    1. imx-usb-loader pushes over USB a U-Boot bootloader into the i.MX6 RAM, and runs it;
    2. This U-Boot automatically enters fastboot mode;
    3. Using the fastboot protocol and its support in U-Boot, we send and flash each part of the system: partition table, bootloader, bootloader environment and root filesystem (which contains the kernel image).

    The SECO uQ7 i.MX6 platform used for our project.

    imx-usb-loader

    imx-usb-loader is a tool written by Boundary Devices that leverages the Serial Download Protocol (SDP) available in Freescale i.MX5/i.MX6 processors. Implemented in the ROM code of the Freescale SoCs, this protocol allows sending code over USB or UART to a Freescale processor, even on a platform that has nothing flashed (no bootloader, no operating system). It is therefore a very handy tool to recover i.MX6 platforms, or as an initial step for factory flashing: you can send a U-Boot image over USB and have it run on your platform.
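
    As an illustration, once the board is in serial download mode and connected over USB, loading and running a bootloader image comes down to a single command. The image name below is just an example, and the exact invocation (configuration files, VID/PID arguments) depends on how the tool was built and on your board:

    # imx_usb u-boot.imx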

    This tool already existed; we only created a package for it in the Buildroot build system, since Buildroot is used for this particular project.

    Fastboot

    Fastboot is a protocol originally created for Android, which is used primarily to modify the flash filesystem via a USB connection from a host computer. Most Android systems run a bootloader that implements the fastboot protocol, and therefore can be reflashed from a host computer running the corresponding fastboot tool. It sounded like a good candidate for the second step of our factory flashing process, to actually flash the different parts of our system.

    Setting up fastboot on the device side

    The well known U-Boot bootloader has limited support for this protocol:

    The fastboot documentation in U-Boot can be found in the source code, in the doc/README.android-fastboot file. A description of the available fastboot options in U-Boot, as well as examples, can be found in this documentation. This gives us the device side of the protocol.

    In order to make fastboot work in U-Boot, we modified the board configuration file to add the following configuration options:

    #define CONFIG_CMD_FASTBOOT
    #define CONFIG_USB_FASTBOOT_BUF_ADDR       CONFIG_SYS_LOAD_ADDR
    #define CONFIG_USB_FASTBOOT_BUF_SIZE          0x10000000
    #define CONFIG_FASTBOOT_FLASH
    #define CONFIG_FASTBOOT_FLASH_MMC_DEV    0
    

    Other options have to be selected, depending on the platform, to fulfill the fastboot dependencies, such as USB Gadget support, GPT partition support, partition UUID support or the USB download gadget. They aren’t explicitly defined anywhere, but have to be enabled for the build to succeed.
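
    For illustration only, these additional options could look roughly like the sketch below. The exact symbol names and values are an assumption on our part and vary between U-Boot versions and boards, so the board configuration file in the patch referenced below remains the reference:

    #define CONFIG_USB_GADGET
    #define CONFIG_USB_GADGET_DUALSPEED
    #define CONFIG_USB_GADGET_DOWNLOAD
    #define CONFIG_G_DNL_VENDOR_NUM     0x0525
    #define CONFIG_G_DNL_PRODUCT_NUM    0xa4a5
    #define CONFIG_G_DNL_MANUFACTURER   "Seco"
    #define CONFIG_EFI_PARTITION
    #define CONFIG_PARTITION_UUIDS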

    You can find the patch enabling fastboot on the Seco MX6Q uQ7 here: 0002-secomx6quq7-enable-fastboot.patch.

    U-Boot enters the fastboot mode on demand: it has to be explicitly started from the U-Boot command line:

    U-Boot> fastboot
    

    From now on, U-Boot waits over USB for the host computer to send fastboot commands.

    Using fastboot on the host computer side

    Fastboot needs a user-space program on the host computer side to talk to the board. This tool can be found in the Android SDK and is often available through packages in many Linux distributions. However, to make things easier, and as we did for imx-usb-loader, we sent a patch to add the Android tools such as fastboot and adb to the Buildroot build system. As of this writing, our patch is still waiting to be applied by the Buildroot maintainers.

    Thanks to this, we can use the fastboot tool to list the available fastboot devices connected:

    # fastboot devices
    

    Flashing eMMC partitions

    For its flashing feature, fastboot identifies the different parts of the system by name. U-Boot maps those names to the names of GPT partitions, so your eMMC normally needs to be partitioned using a GPT partition table rather than an old MBR partition table. For example, provided your eMMC has a GPT partition called rootfs, you can do:

    # fastboot flash rootfs rootfs.ext4
    

    to reflash the contents of the rootfs partition with the rootfs.ext4 image.

    However, while using GPT partitioning is fine in most cases, the i.MX6 has a constraint that the bootloader needs to be at a specific location on the eMMC (the boot ROM expects to find it at a fixed offset, 0x400, from the start of the device), which conflicts with the location of the GPT partition table.

    To work around this problem, we patched U-Boot to allow the fastboot flash command to use an absolute offset in the eMMC instead of a partition name. Instead of displaying an error if a partition does not exist, fastboot tries to use the name as an absolute offset. This allowed us to use MBR partitions and to flash our images, including U-Boot, at defined offsets. For example, to flash U-Boot, we use:

    # fastboot flash 0x400 u-boot.imx
    

    The patch adding this workaround in U-Boot can be found at 0001-fastboot-allow-to-flash-at-a-given-address.patch. We are working on implementing a better solution that can potentially be accepted upstream.

    Automatically starting fastboot

    The fastboot command must be explicitly called from the U-Boot prompt in order to enter fastboot mode. This is an issue for our use case, because it means the flashing process can’t be fully automated and requires human interaction. Using imx-usb-loader, we want to send a U-Boot image that automatically enters fastboot mode.

    To achieve this, we modified the U-Boot configuration, to start the fastboot command at boot time:

    #define CONFIG_BOOTCOMMAND "fastboot"
    #define CONFIG_BOOTDELAY 0
    

    Of course, this configuration is only used for the U-Boot sent using imx-usb-loader. The final U-Boot flashed on the device will not have the same configuration. To distinguish the two images, we named the U-Boot image dedicated to fastboot uboot_DO_NOT_TOUCH.

    Putting it all together

    We wrote a shell script to automatically launch the modified U-Boot image on the board, and then flash the different images on the eMMC (U-Boot and the root filesystem). We also added an option to flash an MBR partition table as well as flashing a zeroed file to wipe the U-Boot environment. In our project, Buildroot is being used, so our tool makes some assumptions about the location of the tools and image files.

    Our script can be found here: flash.sh. To flash the entire system:

    # ./flash.sh -a
    

    To flash only certain parts, like the bootloader:

    # ./flash.sh -b 
    

    By default, our script expects the Buildroot output directory to be in buildroot/output, but this can be overridden using the BUILDROOT environment variable.
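
    To give an idea of the overall flow, here is a heavily simplified sketch of what such a script does. This is not the actual flash.sh from our project: the image names, the sleep delay and the flashing steps (other than the 0x400 bootloader offset discussed above) are illustrative assumptions only.

    #!/bin/sh
    # Simplified illustration of the factory flashing flow described above.
    IMAGES=${BUILDROOT:-buildroot/output}/images

    # 1. Push the fastboot-enabled U-Boot into i.MX6 RAM over USB and run it
    imx_usb "$IMAGES/uboot_DO_NOT_TOUCH.imx"

    # Give the board a moment to enumerate as a fastboot USB device
    sleep 3
    fastboot devices

    # 2. Flash the different parts of the system over the fastboot protocol
    fastboot flash 0x400  "$IMAGES/u-boot.imx"    # final bootloader, at its fixed offset
    fastboot flash rootfs "$IMAGES/rootfs.ext4"   # root filesystem

    # 3. Reboot into the freshly flashed system
    fastboot reboot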

    Conclusion

    By assembling existing tools and mechanisms, we have been able to quickly create a factory flashing process for i.MX6 platforms that is really simple and efficient. It is worth mentioning that we have re-used the same idea for the factory flashing process of the C.H.I.P computer. On the C.H.I.P, instead of using imx-usb-loader, we have used FEL based booting: the C.H.I.P indeed uses an Allwinner ARM processor, providing a different recovery mechanism than the one available on i.MX6.

    by Antoine Ténart at February 15, 2016 09:55 AM

    February 14, 2016

    Harald Welte

    Back from netdevconf 1.1 in Seville

    I've had the pleasure of being invited to netdevconf 1.1 in Seville, Spain.

    After about a decade of absence in the Linux kernel networking community, it was great to meet lots of former colleagues again, as well as to see what kind of topics are currently being worked on and under discussion.

    The conference had a really nice spirit to it. I like the fact that it is run by the community itself. Organized by respected members of the community. It feels like Linux-Kongress or OLS or UKUUG or many others felt in the past. There's just something that got lost when the Linux Foundation took over (or pushed aside) virtually any other Linux kernel related event on the planet in the past :/ So thanks to Jamal for starting netdevconf, and thanks to Pablo and his team for running this particular instance of it.

    I never really wanted to leave netfilter and the Linux kernel network stack behind - but then my problem appears to be that there are simply way too many things of interest to me, and I had to venture first into RFID (OpenPCD, OpenPICC), then into smartphone hardware and software (Openmoko) and finally embark on a journey of applied telecoms archeology by starting OpenBSC, OsmocomBB and various other Osmocom projects.

    Staying in Linux kernel networking land was simply not an option with a scope that can only be defined as wide as wanting to implement any possible protocol on any possible interface of any possible generation of cellular network.

    At times like attending netdevconf I wonder if I made the right choice back then. Linux kernel networking is a lot of fun and hard challenges, too - and it is definitely an area that's much more used by many more organizations and individuals: The code I wrote on netfilter/iptables is probably running on billions of devices by now. Compare that to the Osmocom code, which is probably running on a few thousands of devices, if at all. Working on Open Source telecom protocols is sometimes a lonely fight. Not that I don't value the entire team of developers involved in it; quite the contrary. But lonely in the context that 99.999% of that world is a proprietary world, and FOSS cellular infrastructure is just the 0.001% at the margin of all of that.

    On the Linux kernel side, you have virtually every IT company putting in their weight these days, and properly funded development is not that hard to come by. In cellular, reasonable funding for anything (compared to the scope and complexity of the tasks) is rather the exception than the norm.

    But no, I don't have any regrets. It has been an interesting journey and I probably had the chance to learn many more things than if I had stayed in TCP/IP-land.

    If only each day had 48 hours and I could work both on Osmocom and on the Linux kernel...

    by Harald Welte at February 14, 2016 11:00 PM

    February 12, 2016

    Video Circuits

    Glass House (1983)




    "Video by G.G. Aries
    Music by Emerald Web
    from California Images: Hi Fi For The Eyes"

    by Chris (noreply@blogger.com) at February 12, 2016 03:16 AM

    February 11, 2016

    Elphel

    NC393 camera is fit for flight

    The components for 10393 and other related circuit boards for the new NC393 camera series have been ordered and contract manufacturing (CM) is ready to assemble the first batch of camera boards.

    In the meantime, the extruded parts that will be made into the NC393 camera body have been received at Elphel. The extrusion looks very slick, with thin, 1mm walls made out of strong 6061-T6 aluminium, and weighs only 55g. The camera’s new lightweight design is suitable for use on a small aircraft. The heat frame responsible for cooling the powerful processor has also been extruded.

    We are very pleased with the performance of Profile Precision Extrusions, located in Phoenix, Arizona, which has delivered a very accurate product ahead of the proposed schedule. Now we can proudly engrave “Made in USA” on the camera, as now even the camera body parts are made in the United States.

    Of course, we have tried to order the extrusion in China, but the intricately detailed profile is difficult to extrude and tolerances were hard to match, so when Profile Precision was recommended to us by local extrusion facilities we were happy to discover the outstanding quality this company offers.

     

    Photos: NC393 extrusion, heat frame extrusion, and a set of four extrusions

     

    While waiting for the extruded parts we have been playing with another new toy: the 3D printer. We have been creating prototypes of various camera models of the NC393 series. The cameras are designed and modelled in a 3D virtual environment, and can be viewed and even taken apart with a mouse click thanks to X3dom technology. The next step is to build actual parts on the 3D printer and physically assemble the camera prototypes, which will allow us to start using the prototypes in the physical world: finding what features are missing, and correcting and finalizing the design. For example, when the mini-panoramic NC393-4PI4 camera prototype was assembled it was clear that it needs the 4 fins (now seen on the final model) to protect the lenses from touching the surfaces as well as to provide shade from the sun. NC393-4PI4 and NC393-4PI4-IMU-GPS are small 360 degree panoramic cameras assembled with 4 fish-eye lenses especially suitable for interior panoramic applications.

    The prototypes are not as slick as the actual aluminium bodies, but they give a very good example of what the actual cameras will look like.

     

    Photos: NC393 prototype parts, NC393-M2242-CS prototype, NC393-4PI4-IMU-GPS prototype

     

    As of today, the 10393 and other boards are in production, the prototypes are being built and tested for design functionality, and the aluminium extrusions have been received. With all this taken care of, we are now less than one month away from the NC393 being offered for sale; the first cameras will be distributed to the loyal Elphel customers who placed and pre-paid their orders several weeks ago.

    by olga at February 11, 2016 10:49 PM

    February 09, 2016

    Harald Welte

    netdevconf 1.1: Running cellular infrastructure on Linux

    Today I had the pleasure of presenting at netdevconf 1.1 a tutorial about Running cellular infrastructure on Linux. The tutorial is intended to guide you through the process of setting up + configuring your own minimal private GSM+GPRS network.

    The video recording is available from https://www.youtube.com/watch?v=I4i2Gy4JhDo

    Slides are available at http://git.gnumonks.org/index.html/laforge-slides/plain/2016/netdevconf-osmocom/running-foss-gsm.html

    by Harald Welte at February 09, 2016 11:00 PM

    February 04, 2016

    osPID

    Brand New Shining Website

    We’ve been working hard over the last month or so getting our old website sorted out. Out-of-date software running on the site, an enormous amount of spam on the forum, and software update mishaps led us to completely redo everything. The new website runs completely on WordPress, removing the wiki software (MediaWiki) and the forum software (phpBB). Now, both the forum and wiki are served through WordPress using bbPress and custom posts respectively. We did our best to migrate all content over from the old platforms. The wiki content came over perfectly, and we were even able to add some updates. The forum was also ported (posts/topics/accounts), but we were unable to bring over account passwords. As a result, you will need to do a password reset before using the new forum. We’re sorry about the inconvenience.

    We hope that this new website will help us better serve the osPID community. Please let us know if there are any broken links or other issues with the website.

    Take care!

    by rocketscream at February 04, 2016 01:55 PM

    February 03, 2016

    Bunnie Studios

    Help Make “The Essential Guide to Electronics in Shenzhen” a Reality

    Readers of my blog know I’ve been going to Shenzhen for some time now. I’ve taken my past decade of experience and created a tool, in the form of a book, that can help makers, hackers, and entrepreneurs unlock the potential of the electronics markets in Shenzhen. I’m looking for your help to enable a print run of this book, and so today I’m launching a campaign to print “The Essential Guide to Electronics in Shenzhen”.

    As a maker and a writer, the process of creating the book is a pleasure, but I’ve come to dread the funding process. Today is like judgment day; after spending many months writing, I get to find out if my efforts are deemed worthy of your wallet. It’s compounded by the fact that funding a book is a chicken-and-egg problem; even though the manuscript is finished, no copies exist, so I can’t send it to reviewers for validating opinions. Writing the book consumes only time; but printing even a few bound copies for review is expensive.

    In this case, the minimum print run is 1,000 copies. I’m realistic about the market for this book – it’s most useful for people who have immediate plans to visit Shenzhen, and so over the next 45 days I think I’d be lucky if I got a hundred backers. However, I don’t have the cash to finance the minimum print run, so I’m hoping I can convince you to purchase a copy or two of the book in the off-chance you think you may need it someday. If I can hit the campaign’s minimum target of $10,000 (about 350 copies of the book), I’ll still be in debt, but at least I’ll have a hope of eventually recovering the printing and distribution costs.

    The book itself is the guide I wish I had a decade ago; you can have a brief look inside here. It’s designed to help English speakers make better use of the market. The bulk of the book consists of dozens of point-to-translate guides relating to electronic components, tools, and purchasing. It also contains supplemental chapters to give a little background on the market, getting around, and basic survival. It’s not meant to replace a travel guide; its primary focus is on electronics and enabling the user to achieve better and more reliable results despite the language barriers.

    Below is an example of a point-to-translate page:

    For example, the above page focuses on packaging. Once you’ve found a good component vendor, sometimes you find your parts are coming in bulk bags, instead of tape and reel. Or maybe you just need the whole thing put in a shipping box for easy transportation. This page helps you specify these details.

    I’ve put several pages of the guide plus the whole sales pitch on Crowd Supply’s site; I won’t repeat that here. Instead, over the coming month, I plan to post a couple stories about the “making of” the book.

    The reality is that products cost money to make. Normally, a publisher takes the financial risk to print and market a book, but I decided to self-publish because I wanted to add a number of custom features that turn the book into a tool and an experience, rather than just a novel.

    The most notable, and expensive, feature I added is the pages of blank maps interleaved with business card and sample holders.

    Note that in the pre-print prototype above, the card holder pages are all in one section, but the final version will have one card holder per map.

    When comparison shopping in the market, it’s really hard to keep all the samples and vendors straight. After the sixth straight shop negotiating in Chinese over the price of switches or cables, it’s pretty common that I’ll swap a business card, or a receipt will get mangled or lost. These pages enable me to mark the location of a vendor, associate it with a business card and pricing quotation, and if the samples are small (like the LEDs in the picture above) keep the sample with the whole set. I plan on using a copy of the book for every project, so a couple years down the road if someone asks me for another production run, I can quickly look up my suppliers. Keeping the hand-written original receipts is essential, because suppliers will often honor the pricing given on the receipt, even a couple years later, if you can produce it. The book is designed to give the best experience for sourcing components in the Shenzhen electronic markets.

    In order to accommodate the extra thickness of samples, receipts and business cards, the book is spiral-bound. The spiral binding is also convenient for holding a pen to take notes. Finally, the spiral binding also allows you to fold the book flat to a page of interest, allowing both the vendor and the buyer to stare at the same page without fighting to keep the book open. I added an elastic strap in the back cover that can be used as a bookmark, or to help keep the book closed if it starts to get particularly full.

    I also added tabbed pages at the beginning of every major section, to help with quickly finding pages of interest. Physical print books enable a fluidity in human interaction that smartphone apps and eBooks often fail to achieve. Staring at a phone to translate breaks eye contact, and the vendor immediately loses interest; momentum escapes as you scroll, scroll, scroll to the page of interest, struggle with auto-correction on a tiny on-screen keyboard, or worse yet stare at an hourglass as pages load from the cloud. But pull out the book and start thumbing through the pages, and the vendor can also see and interact with the translation guide. They become a part of the experience; it’s different, interesting, and keeps their attention. Momentum is preserved as both of you point at various terms on the page to help clarify the transaction.

    Thus, I spent a fair bit of time customizing the physical design of the book to make it into a tool and an experience. I considered the human factors of the Shenzhen electronics market; this book is not just a dictionary. This sort of tweaking can only be done by working with the printer directly; we had to do a bit of creative problem solving to figure out a process that brings all these elements together and can also pump out books at a rate fast enough to keep them affordable. Of course, the cost of these extra features is reflected in the book’s $35 cover price (discounted to $30 if you back the campaign now), but I think the book’s value as a sourcing and translation tool makes up for its price, especially compared to the cost of plane tickets. Or worse yet, getting the wrong part because of a failure to communicate, or losing track of a good vendor because a receipt got lost in a jumble of samples.

    This all brings me back to the point of this post. Printing the book is going to cost money, and I don’t have the cash to print and inventory the book on my own. If you think someday you might go to Shenzhen, or maybe you just like reading what I write or how the cover looks, please consider backing the campaign. If I can hit the minimum funding target in the next 45 days, it will enable a print run of 1,000 books and help keep it in stock at Crowd Supply.

    Thanks, and happy hacking!

    by bunnie at February 03, 2016 04:13 PM

    ZeptoBARS

    Noname TL431 : weekend die-shot

    Yet another noname TL431.
    Die size 730x571 µm.


    February 03, 2016 05:50 AM

    January 31, 2016

    Harald Welte

    On the OpenAirInterface re-licensing

    In the recent FOSDEM 2016 SDR Devroom, the Q&A session following a presentation on OpenAirInterface touched the topic of its controversial licensing. As I happen to be involved deeply with Free Software licensing and Free Software telecom topics, I thought I might have some things to say about this topic. Unfortunately the Q&A session was short, hence this blog post.

    As a side note, the presentation was certainly the least technical one in all of the FOSDEM SDR track, and that in front of a deeply technical audience. And probably the only presentation at all at FOSDEM talking a lot about "Strategic Industry Partners".

    Let me also state that I actually have respect for what OAI/OSA has been and still is doing. I just don't think it is attractive to the Free Software community - and it might actually not be Free Software at all.

    OpenAirInterface / History

    Within EURECOM, a group around Prof. Raymond Knopp has been working on a Free Software implementation of all layers of the LTE (4G) system known as OpenAirInterface. It includes the physical layer and goes through to the core network.

    The OpenAirInterface code was for many years under GPL license (GPLv2, other parts GPLv3). Initially the SVN repositories were not public (despite the license), but after some friendly mails one (at least I) could get access.

    I've read through the code at several points in the past, it often seemed much more like a (quick and dirty?) proof of concept implementation to me, than anything more general-purpose. But then, that might have been a wrong impression on my behalf, or it might be that this was simply sufficient for the kind of research they wanted to do. After all, scientific research and FOSS often have a complicated relationship. Researchers naturally have their papers as primary output of their work, and software implementations often are more like a necessary evil than the actual goal. But then, I digress.

    Now at some point in 2014, a new organization, the OpenAirInterface Software Association (OSA), was established. The idea apparently was to get involved with the tier-1 telecom suppliers (like Alcatel, Huawei, Ericsson, ...) and work together on an implementation of Free Software for future mobile data, so-called 5G technologies.

    Telecom Industry and Patents

    In case you don't know, the classic telecom industry loves patents. Pretty much anything and everything is patented, and the patents are heavily enforced. And not just between Samsung and Apple, or more recently also Nokia and Samsung - but basically all the time.

    One of the big reasons why even the most simple UMTS/3G capable phones are so much more expensive than GSM/2G is the extensive (and expensive) list of patents Qualcomm requires every device maker to license. In the past, this was not even a fixed per-unit royalty, but the license depended on the actual overall price of the phone itself.

    So wanting to work on a Free Software implementation of future telecom standards with active support and involvement of the telecom industry obviously means contention in terms of patents.

    Re-Licensing

    The existing GPLv2/GPLv3 license of the OpenAirInterface code of course would have meant that contributions from the patent-holding telecom industry would have to come with appropriate royalty-free patent licenses. After all, of what use is it if the software is free in terms of copyright licensing, but then you still have the patents that make it non-free.

    Now the big industry of course wouldn't want to do that, so the OSA decided to re-license the code-base under a new license.

    As we apparently don't yet have sufficient existing Free Software licenses, they decided to create a new license. That new license (the OSA Public License V1.0) not only does away with copyleft, but also does away with a normal patent grant.

    This is very sad in several ways:

    • license proliferation is always bad. Major experts and basically all major entities in the Free Software world (FSF, FSFE, OSI, ...) are opposed to it and see it as a problem. Even companies like Intel and Google have publicly raised concern about license proliferation.
    • abandoning copyleft. Many people particularly from a GNU/Linux background would agree that copyleft is a fair deal. It ensures that everyone modifying the software will have to share such modifications with other users in a fair way. Nobody can create proprietary derivatives.
    • taking away the patent grant. Even the non-copyleft Apache 2.0 License the OSA used as a template has a broad patent grant, even for commercial applications. The OSA Public License has only a patent grant for use in a research context.

    In addition to this license change, the OSA also requires a copyright assignment from all contributors.

    Consequences

    What kind of effect does this have in case I want to contribute?

    • I have to sign away my copyright. The OSA can at any given point in time grant anyone whatever license they want to this code.
    • I have to agree to a permissive license without copyleft, i.e. everyone else can create proprietary derivatives of my work
    • I do not even get a patent grant from the other contributors (like the large Telecom companies).

    So basically, I have to sign away my copyright, and I get nothing in return. No copyleft that ensures other people's modifications will be available under the same license, no patent grant, and I don't even keep my own copyright to be able to veto any future license changes.

    My personal opinion (and apparently those of other FOSDEM attendees) is thus that the OAI / OSA invitation to contributions from the community is not a very attractive one. It might all be well and fine for large industry and research institutes. But I don't think the Free Software community has much to gain in all of this.

    Now OSA will claim that the above is not true, and that all contributors (including the Telecom vendors) have agreed to license their patents under FRAND conditions to all other contributors. It even seemed to me that the speaker at FOSDEM believed this was something positive in any way. I can only laugh at that ;)

    FRAND

    FRAND (Fair, Reasonable and Non-Discriminatory) is a frequently invoked buzzword for patent licensing schemes. It isn't actually defined anywhere, and is most likely just meant to sound nice to people who don't understand what it really means. Like, let's say, political decision makers.

    In practice, it is a disaster for individuals and small/medium sized companies. I can tell you first-hand from having tried to obtain patent licenses from FRAND schemes before. While they might have reasonable per-unit royalties and they might offer those royalties to everyone, they typically come with ridiculous minimum annual fees.

    For example let's say they state in their FRAND license conditions you have to pay 1 USD per device, but a minimum of USD 100,000 per year. Or a similarly large one-time fee at the time of signing the contract.

    That's of course very fair to the large corporations, but it makes it impossible for a small company who sells maybe 10 to 100 devices per year, as USD 100,000 / 10 then equals USD 10,000 per device in royalties. Does that sound fair and Non-Discriminatory to you?

    Summary

    OAI/OSA are trying to get a non-commercial / research-oriented foot into the design and specification process of future mobile telecom network standardization. That's a big and difficult challenge.

    However, the decisions they have taken in terms of licensing show that they are primarily interested in aligning with the large corporate telecom industry, and have thus created something that isn't really Free Software (missing non-research patent grant) and might in the end only help the large telecom vendors to uni-directionally consume contributions from academic research, small/medium sized companies and individual hackers.

    by Harald Welte at January 31, 2016 11:00 PM

    January 27, 2016

    January 26, 2016

    Michele's GNSS blog

    uBlox: Galileo, anti-jamming and anti-spoofing firmware

    Just downloaded the firmware upgrade for flash-based M8 modules from uBlox.
    Flashed it in no time.
    The result of UBX-MON-VER is now:



    So checked Galileo in CFG-GNSS:



    Result :)



    Incidentally, there is a "spoofing" flag now as well :O



    Don't dare trying this on M8T...

    by noreply@blogger.com (Michele Bavaro) at January 26, 2016 10:42 PM

    January 22, 2016

    Bunnie Studios

    Novena on the Ben Heck Show

    I love seeing the hacks people do with Novena! Thanks to Ben & Felix for sharing their series of adventures! The custom case they built looks totally awesome, check it out.

    by bunnie at January 22, 2016 04:37 PM

    January 21, 2016

    Bunnie Studios

    Name that Ware January 2016

    The Ware for January 2016 is shown below.

    I just had to replace the batteries on this one, so while it was open I tossed it in the scanner and figured it would make a fun and easy name that ware to start off the new year.

    by bunnie at January 21, 2016 03:37 PM

    Winner, Name that Ware December 2015

    The ware for December 2015 was a Thurlby LA160 logic analyzer. Congrats to Cody Wheeland for nailing it! email me for your prize. Also, thanks to everyone for sharing insights as to why the PCBs developed ripples of solder underneath the soldermask. Fascinating stuff, and now I understand why in PCB processing there’s a step of stripping the tin plate before applying the soldermask.

    by bunnie at January 21, 2016 03:37 PM

    January 19, 2016

    Free Electrons

    ELCE 2015 conference videos available

    As often in recent years, the Linux Foundation has recorded videos of most of the talks at the Embedded Linux Conference Europe 2015, held in Dublin last October.

    These videos are now available on YouTube, and individual links are provided on the elinux.org wiki page that keeps track of presentation materials as well. You can also find them all through the Embedded Linux Conference Europe 2015 playlist on YouTube.

    All this is of course a priceless addition to the on-line slides. We hope these talks will encourage you to participate in the next editions of the Embedded Linux Conference, such as in San Diego in April, or in Berlin in October this year.

    In particular, here are the videos from the presentations from Free Electrons engineers.

    Alexandre Belloni, Supporting multi-function devices in the Linux kernel

    Kernel maintainership: an oral tradition

    Tutorial: learning the basics of Buildroot

    Our CTO Thomas Petazzoni also gave a keynote (Linux kernel SoC mainlining: Some success factors), which was well attended. Unfortunately, like for some of the other keynotes, no video is available.

    by Michael Opdenacker at January 19, 2016 01:06 PM

    January 15, 2016

    Bunnie Studios

    Making of the Novena Heirloom

    Make is hosting a wonderfully detailed article written by Kurt Mottweiler about his experience making the Novena Heirloom laptop. Check it out!


    by bunnie at January 15, 2016 05:39 PM

    Free Electrons

    Device Tree on ARM article in French OpenSilicium magazine

    Our French readers are most likely aware of the existence of a magazine called OpenSilicium, a magazine dedicated to embedded technologies, with frequent articles on platforms like the Raspberry Pi, the BeagleBone Black, topics like real-time, FPGA, Android and many others.

    Open Silicium #17

    Issue #17 of the magazine has been published recently, and features a 14-page article, Introduction to the Device Tree on ARM, written by Free Electrons engineer Thomas Petazzoni.


    Besides Thomas’ article, many other topics are covered in this issue:

    • A summary of the Embedded Linux Conference Europe 2015 in Dublin
    • Icestorm, a free development toolset for FPGA
    • Using the Armadeus APF27 board with Yocto
    • Set up an embedded Linux system on the Zynq ZedBoard
    • Debugging with OpenOCD and JTAG
    • Usage of the mbed SDK on a small microcontroller, the LPC810
    • From Javascript to VHDL, the art of writing synthesizable code using an imperative language
    • Optimization of the 3R streams decompression algorithm

    by Thomas Petazzoni at January 15, 2016 09:16 AM

    Free Electrons at FOSDEM and the Buildroot Developers Meeting

    The FOSDEM conference will take place on January 30-31 in Brussels, Belgium. Like every year, there are lots of interesting talks for embedded developers, starting with the Embedded, Mobile and Automotive devroom, but also in the Hardware and Graphics tracks. Some talks of the IoT and Security devrooms may also be interesting to embedded developers.

    Thomas Petazzoni, embedded Linux engineer and CTO at Free Electrons, will be present during the FOSDEM conference. Thomas will also participate in the Buildroot Developers Meeting that will take place on February 1-2 in Brussels, hosted by Google.

    by Thomas Petazzoni at January 15, 2016 08:52 AM

    January 14, 2016

    Free Electrons

    Linux 4.4, Free Electrons contributions

    Linux 4.4 has been released, a week later than the normal schedule in order to allow kernel developers to recover from the Christmas/New Year period. As usual, LWN has covered the 4.4 cycle merge window, in two articles: part 1 and part 2. This time around, KernelNewbies has a nice overview of the Linux 4.4 changes. With 112 patches merged, we are the 20th contributing company by number of patches according to the statistics.

    Besides our contributions in terms of patches, some of our engineers have also become over time maintainers of specific areas of the Linux kernel. Recently, LWN.net conducted a study of how the patches merged in 4.4 went into the kernel, which shows the chain of maintainers who pushed the patches up to Linus Torvalds. Free Electrons engineers had the following role in this chain of maintainers:

    • As a co-maintainer of the Allwinner (sunxi) ARM support, Maxime Ripard has submitted a pull request with one patch to the clock maintainers, and pull requests with a total of 124 patches to the ARM SoC maintainers.
    • As a maintainer of the RTC subsystem, Alexandre Belloni has submitted pull requests with 30 patches directly to Linus Torvalds.
    • As a co-maintainer of the AT91 ARM support, Alexandre Belloni has submitted pull requests with 46 patches to the ARM SoC maintainers.
    • As a co-maintainer of the Marvell EBU ARM support, Gregory Clement has submitted pull requests with a total of 33 patches to the ARM SoC maintainers.

    Our contributions for the 4.4 kernel were centered around the following topics:

    • Alexandre Belloni continued some general improvements to support for the AT91 ARM processors, with fixes and cleanups in the at91-reset, at91-poweroff, at91_udc, atmel-st, at91_can drivers and some clock driver improvements.
    • Alexandre Belloni also wrote a driver for the RV8803 RTC from Microcrystal.
    • Antoine Ténart added PWM support for the Marvell Berlin platform and enabled the use of cpufreq on this platform.
    • Antoine Ténart did some improvements in the pxa3xx_nand driver, still in preparation for the addition of support for the Marvell Berlin NAND controller.
    • Boris Brezillon did a number of improvements to the sunxi_nand driver, used for the NAND controller found on the Allwinner SoCs. Boris also merged a few patches doing cleanups and improvements to the MTD subsystem itself.
    • Boris Brezillon enabled the cryptographic accelerator on more Marvell EBU platforms by submitting the corresponding Device Tree descriptions, and he also fixed a few bugs found in the driver
    • Maxime Ripard reworked the interrupt handling of per-CPU interrupts on Marvell EBU platforms, especially in the mvneta network driver. This was done in preparation for enabling RSS support in the mvneta driver.
    • Maxime Ripard added support for the Allwinner R8 and the popular C.H.I.P platform.
    • Maxime Ripard enabled audio support on a number of Allwinner platforms, by adding the necessary clock code and Device Tree descriptions, and also several fixes/improvements to the ALSA driver.

    The details of our contributions for 4.4:

    by Thomas Petazzoni at January 14, 2016 02:32 PM

    January 13, 2016

    Michele's GNSS blog

    NT1065 review

    So I finally got around to testing the NT1065… apologies for the lack of detail, but I have done this in my very limited spare time. Also, I would like to clarify that I am in no way affiliated with NTLab.

    Chip overview

    A picture speaks more than a thousand words.
    Figure 1: NT1065 architecture
    Things worth noting above are:
    • Four independent input channels with variable RF gain, so up to 4 distinct antennas can be connected;
    • Two LOs controlled by integer synthesizers, one per pair of channels, tuned respectively to the high and low RNSS bands, although one can choose to route the upper LO to the lower pair and have 4 phase-coherent channels;
    • ADC sample rate derived from either LO through integer division;
    • 4 independent image-reject mixers, IF filters and variable gain (with AGC) paths;
    • Four independent outputs, either as a CMOS two-bit ADC or analogue differential, so one could
      • connect his/her own ADC or
      • phase-combine the IF outputs in a CRPA fashion prior to digitisation;
    • A standard SPI port for control.
    Another important point for a hardware designer (I used to be a little bit of that) is this:
    Figure 2: NT1065 application schematic
    The pin allocation shows a 1 cm2 QFN88 (with 0.4mm pin pitch) with plenty of room between the pins and an optimal design for easy routing of the RF and IF channels. Packages like that aren’t easy to find nowadays for such complex RF ICs (everything is a BGA or WLCSP), but I love QFNs because they are easy to solder with a bit of SMD practice and can be “debugged” if the PCB layout is not perfect the first time.

    Evaluation kit overview

    The evaluation kit presents itself like this:
    Figure 3: NT1065 evaluation kit
    One can see the RF inputs at the top, the external reference clock input on the left, the control interface on the right and the IF/digital part on the bottom. The large baluns (for differential to single ended conversion) were left unpopulated for me as I don’t use redpitaya (yet?). The control board is the same used for the NT1036.
    I configured the evaluation kit to be powered by the control board (that was a mistake, see later) and connected the ADC outputs and clock to the Spartan6 on the SdrNav40, used here simply as a USB HS DAQ. In total, there is one clock line and 8 data lines (4 pairs of SIGN/MAGN, one per channel).
    The IF filters act on the Lower Side Band (LSB) or the Upper Side Band (USB) for high and low injection mixing respectively, and can be configured for a cutoff frequency between 10 and 35 MHz. Thus, bandwidths of up to 30 MHz per signal can be accommodated and the minimum ADC sampling rate should be around 20 Msps. 20 MByte/sec is not easy to handle for a USB HS controller, so I will look into other more suitable (but still cost effective) DAQ options to evaluate the front-end. In the meantime, I could do a lot within the 32 MByte/sec of the FX2LP by testing either 2 channels only at 2 bits, or all 4 channels at 1 bit and compressing nibbles into bytes (halving the requested rate).
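    As a back-of-the-envelope check of those numbers, here is a quick sketch (plain Python; the helper name is mine):

    def rate_mbyte_per_s(fs_msps, channels, bits_per_sample):
        """Sustained output rate in MByte/s for tightly packed samples."""
        return fs_msps * channels * bits_per_sample / 8.0

    print(rate_mbyte_per_s(20, 4, 2))   # 20.0  -> already hard for a USB HS controller
    print(rate_mbyte_per_s(53, 4, 2))   # 53.0  -> the full 4-channel test configuration
    print(rate_mbyte_per_s(53, 2, 2))   # 26.5  -> 2 channels at 2 bit fits the FX2LP budget
    print(rate_mbyte_per_s(53, 4, 1))   # 26.5  -> 4 channels at 1 bit also fits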
    The evaluation software is a single window, very simple and intuitive to use but very effective.
    Figure 4: Evaluation software
    The software comes with several sample configuration files that can be very useful to quickly start evaluating the chip.

    Tests

    All my tests used a good 10MHz CMOS reference.

    GPS L1

    The first test was GPS L1 in high injection mode, setting the first LO to 1590 MHz (R1=1, N1=159), leading to an IF of -14.58 MHz, a filter bandwidth of about 28 MHz and a sampling frequency of 53 Msps (K1/2=15). I streamed one minute to disk and verified correct operation.
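    To make the synthesizer arithmetic explicit, here is a small sketch (plain Python; the divider interpretation, fs = LO / (2 x K1/2), is my reading of the settings above and matches 1590 MHz / 30 = 53 Msps):

    F_REF = 10e6           # external reference used for all tests, Hz
    GPS_L1 = 1575.42e6     # GPS L1 carrier, Hz

    def lo_freq(r, n, f_ref=F_REF):
        """Integer synthesizer output: f_LO = f_ref * N / R."""
        return f_ref * n / r

    lo1 = lo_freq(r=1, n=159)        # 1590 MHz
    if_l1 = GPS_L1 - lo1             # -14.58 MHz (high-side injection)
    fs = lo1 / (2 * 15)              # K1/2 = 15 -> LO divided by 30 -> 53 Msps
    print(if_l1 / 1e6, fs / 1e6)     # -14.58  53.0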
    Figure 5: GPS L1 PSD (left) and histogram+time series (right)
    Figure 6: G30 correlation of L1 code detail (left) and all satellites (right).

    GPS L1/L5

    When performing this test I bumped into a hardware problem. If the control board powers the NT1065 evaluation kit from its internal 3.3V reference, the power line is gated by a small resistor, so the voltage depends on the current drawn by the chip (undesirable!). Enabling the second channel in the GUI made the chip draw more current, so the voltage on the evaluation kit dropped away from the SdrNav40 one, which was steady at 3.3V. The level mismatch made reading the digital levels unreliable and no meaningful data could be transferred. So I powered the evaluation kit from the SdrNav40 3.3V voltage reference and everything was happy again.
    In this configuration L1 is again at -14.58 MHz (1590 MHz, high side injection) and L5 is on the third channel (low RNSS) at -13.55 MHz (R2=1, N2=119 for 1190 MHz, high side injection). Of note is the relatively large spike in the spectrum at 1166 MHz; it is not an obvious harmonic, so it could be some unwanted emission from neighbouring equipment.
    Figure 7: L5 PSD (left) and histogram+time series (right)
    Figure 8: G30 correlation of L5 code detail (left) and all satellites (right).
    Interestingly, the Matlab satellite search algorithm returns respectively for L1 and L5:
    Searching GPS30 -> found: Doppler +4500.0 CodeShift:  35226 xcorr: 12502.4
    Searching GPS30 -> found: Doppler +3000.0 CodeShift:  35226
    The above outputs show coarse but correctly scaled Doppler [Hz] and a perfect match in code delay [samples] (just by chance spot on).

    4x GPS L1

    In this case I enabled all 4 channels and shared the LO amongst them all. Unfortunately I cannot show the 6dB increase in gain when steering a beam towards a satellite, as all RF inputs were connected to the same antenna and, the noise being the same, steering the phase is useless. However, it is possible to verify that the phase amongst the channels is perfectly coherent (a requirement for an easy CRPA).
    The signals were conveniently brought to baseband, filtered and decimated by 5, resulting in a 10.6 MHz sampling rate. As one can see below, the power was well matched and the inter-channel carrier phase is extremely steady and constant over the 60 seconds of capture time. In this zero-baseline case, one can easily check that such phase difference is also the same across different satellites (as it does not depend on geometry but just on the different path lengths beyond the splitter).
    Figure 9: PSD of the IF obtained from the 4 channels and relative carrier phase

    GPS L1 + Glonass G1 + GPS L5 + Glonass G3

    Here I wanted to verify reception of Glonass G1 on the second channel (upper side band). At this point it had become merely a formality. Glonass CH0 is at +12 MHz, so the acquisition returned correctly as shown below. Of course 53 Msps for a BPSK(0.5) is a bit of an overkill :)
    Figure 10: Glonass acquisition all satellites (left) and CH-5 detail (right).

    GPS L1 + Beidou B1 + GPS L5 + Galileo E5b

    The case of GPS and Beidou was a bit more challenging, as the distance between L1 and B1 is only 14.322 MHz, thus the IFs must be around 7 MHz. I decided to set the LO to 1570 MHz (R1=1, N1=157). So GPS went upper side band on channel 1 at +5.42 MHz IF, and Beidou consequently went lower side band on channel 2 at -8.902 MHz. Channels 3 and 4 were enabled with LO2 set at 1190 MHz, in the middle between E5a and E5b, in order to verify AltBOC reception.
    As 1570 MHz is a nasty frequency from which to generate a round sampling frequency, I decided to derive the clock from LO2 using K2/2 = 10 and therefore stream at 59.5 Msps. As one can see below, the L1 peak has moved very close to baseband now and the sampling frequency comfortably exceeds the Nyquist requirement.
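    The same arithmetic for this frequency plan, with a crude Nyquist sanity check for real IF sampling (the ~4 MHz bandwidth used in the check is an assumed value for the signal main lobes, not a figure from the text):

    GPS_L1 = 1575.420e6
    BDS_B1 = 1561.098e6
    lo1 = 10e6 * 157          # R1=1, N1=157 -> 1570 MHz
    lo2 = 10e6 * 119          # R2=1, N2=119 -> 1190 MHz
    fs  = lo2 / (2 * 10)      # K2/2 = 10 -> 59.5 Msps, derived from LO2

    if_gps = GPS_L1 - lo1     # +5.420 MHz, upper side band on channel 1
    if_bds = BDS_B1 - lo1     # -8.902 MHz, lower side band on channel 2

    def fits_nyquist(f_if, bandwidth, f_s):
        """The signal band must stay below fs/2 when sampling a real IF."""
        return abs(f_if) + bandwidth / 2 <= f_s / 2

    print(if_gps / 1e6, if_bds / 1e6)
    print(fits_nyquist(if_gps, 4e6, fs), fits_nyquist(if_bds, 4e6, fs))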
    Figure 11: GPS acquisition with close-in IF
    Figure 12: Beidou B1 spectrum (MSS on the right) and acquisition (incidentally also showing IGSO generation 3 satellites C31 and C32).
    Figure 13: E5a acquisition of E30
    Figure 14: E5b acquisition of E30, showing a perfect match in code delay with E5a as one would expect.

    Conclusions and work to do

    I am very surprised at how little time it took me from unboxing the kit to successfully using it to acquire all the GNSS signals I could think of and to test all the configurations. Of course I had the previous experience with the NT1036, but this time I had the perception of a solid, feature-rich, plug-and-play IC.
    On my todo list there is an extension of this post with a home-made measurement of channel isolation… and the way I plan to do it should be interesting to the readers :)

    by noreply@blogger.com (Michele Bavaro) at January 13, 2016 09:49 PM

    January 11, 2016

    Altus Metrum

    AltOS 1.6.2

    AltOS 1.6.2 — TeleMega v2.0 support, bug fixes and documentation updates

    Bdale and I are pleased to announce the release of AltOS version 1.6.2.

    AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STM32F042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

    This is a minor release of AltOS, including support for our new TeleMega v2.0 board, a small selection of bug fixes and a major update of the documentation.

    AltOS Firmware — TeleMega v2.0 added

    The updated six-channel flight computer, TeleMega v2.0, has a few changes from the v1.0 design:

    • CC1200 radio chip instead of the CC1120. Better receive performance for packet mode, same transmit performance.

    • Serial external connector replaced with four PWM channels for external servos.

    • Companion pins rewired to match EasyMega functionality.

    None of these change the basic functionality of the device, but they do change the firmware a bit so there's a new package.

    AltOS Bug Fixes

    We also worked around a ground station limitation in the firmware:

    • Slow down telemetry packets so receivers can keep up. With TeleMega v2 offering a fast CPU and faster radio chip, it was overrunning our receivers so a small gap was introduced between packets.

    AltosUI and TeleGPS applications

    A few minor fixes are in this release:

    • Post-flight TeleMega and EasyMega orientation computations were off by a factor of two

    • Downloading eeprom data from flight hardware would bail if there was an error in a data record. Now it keeps going.

    Documentation

    I spent a good number of hours completely reformatting and restructuring the Altus Metrum documentation.

    • I've changed the source format from raw docbook to asciidoc, which has made it much easier to edit and to use docbook features like links.

    • The css moves the table of contents out to a sidebar so you can navigate the html format easily.

    • There's a separate EasyMini manual now, constructed by taking sections from the larger manual.

    by keithp's rocket blog at January 11, 2016 05:03 AM

    January 03, 2016

    Harald Welte

    Conferences I look forward to in 2016

    While I was still active in the Linux kernel development / network security field, I was regularly attending 10 to 15 conferences per year.

    Doing so is relatively easy if you earn a decent freelancer salary and are working all by yourself. Running a company funded out of your own pockets, with many issues requiring (or at least benefiting from) personal physical presence in the office, changes that.

    Nevertheless, after some years of being less of a conference speaker, I'm happy to see that the tide is somewhat changing in 2016.

    After my talk at 32C3, I'm looking forward to attending (and sometimes speaking) at events in the first quarter of 2016. Not sure if I can keep up that pace in the following quarters...

    FOSDEM

    FOSDEM (http://fosdem.org/2016) is a classic, and I don't even remember for how many years I've been attending it. I would say it is fair to state that it is the single largest event specifically by and for community-oriented free software developers. It feels like home every time.

    netdevconf 1.1

    netdevconf (http://www.netdevconf.org/1.1/) is actually something I'm really looking forward to. It is a relatively new grass-roots conference, deeply technical, and oriented solely towards Linux networking hackers: the part of the kernel community that I've known and loved during my old netfilter days.

    I'm very happy to attend the event, both for its technical content and of course to meet old friends like Jozsef, Pablo, etc. I also read that Kunihiro Ishiguro will be there. I have always adored his initial work on Zebra (whose vty code we coincidentally use in almost all Osmocom projects as part of libosmovty).

    It's great to again see an event that is not driven by commercial / professional conference organizers, high registration fees, and corporate interests. It reminds me of the good old days when Linux was still the underdog and not mainstream... Think of Linuxtag in its early days.

    Linaro Connect

    I'll be attending Linaro Connect for the first time in many years. It's a pity that one cannot run various open source telecom protocol stack / network element projects and a company and at the same time still be involved deeply in Embedded Linux kernel/system development. So I'll use the opportunity to get some view into that field again - and of course meet old friends.

    OsmoDevCon

    OsmoDevCon is our annual invitation-only meeting of the Osmocom developers. It's very low-profile, basically a no-frills family meeting of the Osmocom community. But it is really great to meet with all of the team and hear about their respective experiences / special interest topics.

    TelcoSecDay

    This (https://www.troopers.de/events/troopers16/580_telcosecday_2016_invitation_only/) is another invitation-only event, organized by the makers of the TROOPERS conference. The idea is to make folks from the classic Telco industry meet with people in IT Security who are looking at Telco related topics. I've been there some years ago, and will finally be able to make it again this year to talk about how the current introduction of 3G/3.5G into the Osmocom network side elements can be used for security research.

    by Harald Welte at January 03, 2016 11:00 PM

    January 01, 2016

    Michele's GNSS blog

    Happy begin of 2016

    2015 has just passed. I don't write much here anymore, as time has become a very precious resource and my job imposes tight limitations on what one can or cannot write on the web.
    The yearly update will quickly cover the constellation status, some info on low cost RTK developments and some more SDR thoughts (although the most significant article in that respect will come soon, in another post).

    Constellation updates


    As retrieved from Tomoji Takasu's popular diary, 2015 has seen the following launches:

    Date/Time (UTC)     Satellite             Orbit   Launcher        Launch Site               Notes
    2015/03/25 18:36    GPS Block IIF-9       MEO     Delta-IV        Cape Canaveral, US        G26
    2015/03/27 21:46    Galileo FOC-3, 4      MEO     Soyuz ST-B      Kourou, French Guiana     E26, E22
    2015/03/28 11:49    IRNSS-1D              IGSO    PSLV            Satish Dhawan SC, India   111.75E
    2015/03/31 13:52    BeiDou-3 I1           IGSO    Long March 3C   Xichang, China            C15
    2015/07/15 15:36    GPS Block IIF-10      MEO     Atlas-V         Cape Canaveral, US        G08
    2015/07/25 12:28    BeiDou-3 M1-S, M2-S   MEO     Long March 3B   Xichang, China            ?
    2015/09/10 02:08    Galileo FOC-5, 6      MEO     Soyuz ST-B      Kourou, French Guiana     E24, E30
    2015/09/29 23:23    BeiDou-3 I2-S         IGSO    Long March 3B   Xichang, China            ?
    2015/10/30 16:13    GPS Block IIF-11      MEO     Atlas-V         Cape Canaveral, US        G10
    2015/11/10 21:34    GSAT-15 (GAGAN)       GEO     Ariane 5        Kourou, French Guiana     93.5E
    2015/12/17 11:51    Galileo FOC-8, 9      MEO     Soyuz ST-B      Kourou, French Guiana     E??, E??


    GPS 

    GPS replaced three IIA birds with brand new IIFs, as one can see in Figure 1. The number of GPS satellites transmitting L5 has now risen to 11 (as one can also verify with UNAVCO). The number of GPS satellites with L2C is instead 18 (quite close to a nominal constellation!). The question is now how GPS will proceed in 2016 and beyond, having seen the delays that affect OCX and in general the bad comments (see e.g. 1 and 2) on the progress of the modernisation of GPS.
    Figure 1: One year of GPS observations, obtained using a bespoke tool from the freely available data courtesy of the IGS network.
    Glonass

    A stable situation here, as seen in Figure 2, with the only exception of PRN 17 going offline in mid-October (perhaps soon to be replaced, according to the table of upcoming launches).
    Figure 2: One year of Glonass observations
    Galileo

    The situation has been very "dynamic" for Galileo but is indeed very promising, as seen in Figure 3. The latest launch went well and we can hope for several signals in space in 2016: hopefully the year that Galileo will make its appearance in most consumer devices. Incidentally, there are as of today 8 satellites broadcasting E5a.




    Beidou 

    Also for Beidou the situation is rapidly evolving, as can be seen in Figure 4. My colleague James and I did a detailed study on the new generation satellites and published part of it on GPSWorld. Indeed the 3rd generation test birds host a very versatile payload that allows them to broadcast modern navigation signals on three frequencies. Incidentally, C34 and C33 (the two MEO space vehicles) also broadcast a QPSK signal on E5a.
    Figure 4: One year of Beidou observations.

    Low cost RTK

    An awful lot of progress here, with NVS, Skytraq, Geostar Navigation and uBlox releasing multi-constellation single frequency products for RTK.

    NVS released two products with an onboard GPS+Glonass (upgradeable to Galileo) RTK engine: the NV08C-RTK (for standard base-rover configurations) and the NV08C-RTK-A (with added dual antenna heading determination for precision AG). Rumors say that they both run a highly reworked version of RTKLIB on an LPC32xx microcontroller (ARM926EJ-S processor with VFP unit). The price is not public, but again rumors suggest it is a few hundred EUR apiece (in small quantities) for the single receiver version. I got my hands on a couple of boards and built a simple adapter board to be able to use them with a standard laptop and a wireless module fitting the Xbee socket (including this one).



    Skytraq has built on its Navspark initiative and came out with two groundbreaking products, the S2525F8-RTK and the S2525F8-BD-RTK. The, I shall say, provocative prices of 50 and 150 USD respectively set a new threshold that will be very hard to beat. Skytraq has also done extensive analysis on the performance of GPS-only versus GPS+Beidou single frequency RTK, e.g. here and here. In Asia, the dual constellation (2x CDMA) single frequency (1540x and 1526x f0) RTK shows incredibly promising results, mainly due to the impressive number of birds in view. I got my hands on a couple of plug&play evaluation kits and have already verified the sub-minute convergence time to fix in zero baseline and good visibility conditions.



    Geostar Navigation has also recently released the GeoS-3MR, which is practically identical in terms of capability to the GeoS-3 and GeoS-3M, but has a factory setting such that the most recent firmware provides carrier phase for both GPS and Glonass. Although the Glonass phase is not calibrated, last month's statements from Tomoji suggest that this feature could be incorporated in v2.4.3 anyway.
    A few years ago I had designed and produced some carrier boards for the GeoS-3M, so I could just place an order for a few raw-capable chips (at 25 USD each) and test them out. The software provided by the manufacturer (Demo3 and toRNX) allows extracting Rinex observations from the binary logs. At the time I had also developed some parser code for RTKLIB, but I have now found out that it has a small issue... I don't feel like reinstalling C++ Builder just to fix it, so anyone please feel free to take that code and push it to v2.4.3.


     
    uBlox released the M8T module with raw data support for two simultaneous constellations... a very interesting chip, but I have the feeling that some big change is going to happen there, since the company has lately been focussing much more on comms than nav.

    ComNav offers the K500 OEM board also for less than 300 EUR in small quantities.

    In view of all the above, one could expect that initiatives like Reach® and Piksi® will surely have to reconsider their approach. In particular, things based on the Edison® are facing the competition of ARM-based modules which are perfectly capable of RTK and are accessible at a much lower price (e.g. see the Raspberry Pi Zero and C.H.I.P.). SwiftNav has recently released an update, but unless they go multi-frequency rapidly the competition will give them very hard times.

    Finally, low cost dual frequency cards such as the Precis-L1L2 have started to appear. Apparently based on a Chinese Unicorecomm OEM board, it offers multi-constellation multi-frequency RTK at 800 USD.

    SDR

    Over the holidays I assembled the test-bench for the NT1065, the latest multi-constellation front-end from NTLAB. The setup again is very clean and builds on lessons learnt with the NT1036: I will present the first results soon, in the next post.


    Since the chip has the native capability of streaming about 60 MBytes/sec (4 channels of ~15 MHz IF output at 2 bits per sample), a USB2.0 transceiver is sub-optimal as it is limited to about 40 MBytes/sec.
    I started investigating the FT601 USB3.0 transceiver from FTDI and the KSZ9031RNX GigETH transceiver from Micrel, as seen in the beautiful development from Peter Monta. Also, the availability of the FX3 Explorer Kit is tempting as an easy mid-step solution. There are many SDR boards, but I would just need a cheap programmable FPGA+GigETH/USBSS and I cannot find one... Parallella seems the best candidate, with its Porcupine board to use and some software to develop of course (I am surprised nobody has published a GPIO-to-GigEth streamer for the Parallella yet). Ettus and Avnet are much further ahead with powerful SDR platforms (e.g. the B210 and the picoZed SDR SOM) but there is what feels like a steep learning curve to use them. Perhaps it is time again to go design something?
    In the meantime, I am watching the pcDuino3 Nano Lite and the Odroid XU4 as cheap NAS solutions to efficiently store long snapshots of IF data.

    by noreply@blogger.com (Michele Bavaro) at January 01, 2016 10:11 PM

    December 30, 2015

    Harald Welte

    32C3 is over, GSM and GPRS was running fine, osmo-iuh progress

    The 32C3 GSM Network

    32C3 was great from the Osmocom perspective: We could again run our own cellular network at the event in order to perform load testing with real users. We had 7 BTSs running, each with a single TRX. What was new compared to previous years:

    • OsmoPCU is significantly more robust and stable due to the efforts of Jacob Erlbeck at sysmocom. This means that GPRS is now actually still usable in severe overload situations, like 1000 subscribers sharing only very few kilobits. Of course it will be slow, but at least data still passes through as much as that's possible.
    • We were using half-rate traffic channels from day 2 onwards, in order to enhance capacity. Phones supporting AMR-HR would use that, but there are also lots of old phones that only do classic HR (v1). OsmoNITB with the internal MNCC handler has supported TCH/H with HR and AMR for at least five years, but the particular combination of OsmoBTS + OsmoNITB + lcr (all master branches) had not yet been deployed at previous CCC event networks.

    Being forced to provide classic HR codec actually revealed several bugs in the existing code:

    • OsmoBTS (at least with the sysmoBTS hardware) is using a bit ordering that is not compliant with what the spec says about how GSM-HR frames should be put into RTP frames. We hadn't realized this so far, as handing frames from one sysmoBTS to another sysmoBTS of course works, as both use the same (wrong) bit ordering.
    • The ETSI reference implementation of the HR codec has lots of global/static variables, and thus doesn't really support running multiple transcoders in parallel. This is however what lcr was trying (and needing) to do, and it of course failed as state from one transcoder instance was leaking into another. The problem is simple, but the solution is not so simple. If you want to avoid re-structuring the entire code in very intrusive ways, or running one thread per transcoder instance, then the only solution was to basically memcpy() the entire data section of the transcoding library every time you switch from one transcoder instance to the other. It's surprisingly difficult to learn the start + size of that data section at runtime in a portable way, though.

    Thanks to our resident voice codec expert Sylvain for debugging and fixing the above two problems.

    Thanks also to Daniel and Ulli for taking care of the actual logistics of bringing + installing (+ later unmounting) all associated equipment.

    Thanks furthermore to Kevin who has been patiently handling the 'Level 2 Support' cases of people with various problems ending up in the GSM room.

    It's great that there is a team taking care of those real-world test networks. We learn a lot more about our software under heavy load situations this way.

    osmo-iuh progress + talk

    I've been focussing basically full day (and night) over the week ahead of Christmas and during Christmas to bring the osmo-iuh code into a state where we could do an end-to-end demo with a regular phone + hNodeB + osmo-hnbgw + osmo-sgsn + openggsn. Unfortunately I only got it to the point where we do the PDP CONTEXT ACTIVATION on the signalling plane, with no actual user data going back and forth. And then, for strange reasons, I couldn't even demo that at the end of the talk. Well, in either case, the code has made much progress.

    The video of the talk can be found at https://media.ccc.de/v/32c3-7412-running_your_own_3g_3_5g_network#video

    meeting friends

    The annual CCC congress is always an event where you meet old friends and colleagues. It was great talking to Stefan, Dimitri, Kevin, Nico, Sylvain, Jochen, Sec, Schneider, bunnie and many other hackers. Now that the event is over, I wish I could continue working together with all those folks for the rest of the year, too :/

    Some people have been missed dearly. Absence from the CCC congress is not acceptable. You know who you are, if you're reading this ;)

    by Harald Welte at December 30, 2015 11:00 PM

    Video Circuits

    RTL TV 40 ANS - Le Hit- Parade, Featuring EMS Spectron

    A little snippet here of classic TV graphics from an anniversary on RTL, which includes some video mixer feedback effects and some very familiar EMS Spectron/Spectre shapes; blink and you will miss them!



    "La télévison Luxembourgeoise a célébré ses quarantes ans en 1995.Voici un extrait de la soirée qui c'est déroulée à la Villa Louvigny."




    by Chris (noreply@blogger.com) at December 30, 2015 12:10 AM

    December 26, 2015

    Harald Welte

    32C3: Running your own 3G/3.5G cellular network

    Today I had the pleasure of presenting at 32C3 about Running your own 3G/3.5G cellular network. The tutorial covers the ongoing effort of creating a HNB-GW and Iuh/IuCS/IuPS support as part of the Osmocom project.

    The video recording is available from https://media.ccc.de/v/32c3-7412-running_your_own_3g_3_5g_network

    Slides are available at http://git.gnumonks.org/index.html/laforge-slides/plain/2015/osmo_iuh/osmo_iuh.pdf

    by Harald Welte at December 26, 2015 11:00 PM

    December 21, 2015

    Bunnie Studios

    Name that Ware December 2015

    The Ware for December 2015 is shown below.

    This ware got me at “6502”. Thanks to DavidG Cape Town for contributing this specimen!

    One question for the readers (separate from naming the ware!); it’s something I’ve wondered about for decades. On the back side of this board, one can see ripples on the fatter traces. My original assumption was that this is due to a problem with hot air leveling after the application of a solder finish to the bare copper board, before the soldermask is applied. However, the top side is almost entirely smooth, so clearly the process can produce a flatter finish.

    So here’s my quandary: are the ripples intentional (for example, an attempt to increase current capacity by selectively thickening fat traces with a solder coating), or accidental (perhaps microscopic flaws in the soldermask allowing molten metal to seep under the soldermask during wave soldering)?

    Been wondering about this since I was like 15 years old, but never got around to asking anyone…

    Happy holidays to everyone! I’ll be at 32C3 (thankfully I have a ticket), haunting the fail0verflow table. Come enjoy a beer with me, I’m not (officially) giving any talks so I can actually sit back and enjoy the congress this year.

    by bunnie at December 21, 2015 08:16 PM

    Winner Name that Ware November 2015

    The Ware for November 2015 was an RS-485 interface picomotor driver of unknown make and model, but probably similar to one of these. It’s designed to drive piezo (slip stick) motors; the circuits on board generate 150V waveforms at low current to drive a linear actuator with very fine positional accuracy.

    This one was apparently a stumper, as several readers guessed it had something to do with motor control or positioning, but nobody put that together with the high voltage rated parts (yet with no heatsinking, so driving low currents) on the board to figure out that it’s meant for piezo or possibly some other electrostatic (e.g. MEMS) actuators. Better luck next month!

    by bunnie at December 21, 2015 08:16 PM

    Elphel

    X3D assemblies from any CAD

    Converting mechanical assemblies to X3D models from STEP (ISO 10303) files

    Like all manufacturing companies, we use a mechanical CAD program to design our products. We would love to use Free Software programs for that, but so far even FreeCAD has a warning on its download page: “FreeCAD is under heavy development and might not be ready for production use”. We have to use proprietary tools; our choice was a program that runs natively on the GNU/Linux systems we use on our computers. This program generates STEP files that we can send to virtually any machine shop (locally or overseas) and expect to receive manufactured parts that match our design. For the last 6 years we have kept the CAD models for all the camera parts on Elphel Wiki, hoping they might be needed not only by the machine shops we order parts from, but also by our users, to incorporate (or modify) our products in their systems.

    All the mechanical CAD programs can export STEP, we can use this format for assemblies

    The STEP file export is quite adequate for production, but it would be convenient for our users (including ourselves) to be able to easily navigate through the complex assemblies. Theoretically STEP can handle assemblies too, but I got the impression that the CAD program owners are not that interested in interoperability – they want everybody to use their program, and the interoperability scope is limited to a simplified scheme: CAD (theirs) -> CAM (any), and the assembly structure is often lost when generating output files. When we tried to export the Eyesis4π camera as a STEP file it grew to more than 0.5GB in size and, when imported (even by the same program), it resulted in over 1800 solids without any hierarchy or even the part names. Additionally, the colors were lost when the STEP file was imported back, and that is understandable – CAD programs need to be able to produce STEP files (otherwise they would be completely useless), but the importing requirements are more relaxed. Having no control over the proprietary program output, we had to find a way to use the CAM files (in STEP format) in a different way than the CAD providers intended and recreate the assembly structure ourselves.

    FreeCAD as the environment for model conversion

    FreeCAD seemed to us the best choice for the next step, regardless of its “not ready for production” status, as it has the great advantage of being FLOSS, with excellent support for Python access to its functionality (through macros and a nice Python console). First I looked for a possibility to export data as X3D and was impressed that the FreeCAD macro that does that – export_x3d.py – has less than 100 lines of code. It did not export colored faces of electronic components on the PCB, but that was something we could definitely fix ourselves.

    Having working color output was the first step to a more ambitious project – feed the program with a library of STEP files of components and a flat STEP assembly file. The program should recognize each of the objects in the assembly by comparing it with the known parts, replace them with references to the library parts and provide translation and rotation. There are multiple ways to deal with this task, and I will describe what we did later in this post; in short – it just worked. We fed the program with a library of 800+ part files that we had (some custom, some just standard fasteners from McMaster) and the assembly file, and it recognized almost all of the objects and correctly placed them, so Oleg Dzhimiev was able to start working on the viewer to navigate the models using the x3dom technology while I continued working on the converter.

    Links to the converted models

    Here is a link to the Elphel Wiki page Elphel camera assemblies. This page opens multiple designs – they include the new NC393 camera models (for which we have not yet received all the mechanical parts) as well as our current products, for which we already had the needed CAD files.

    We have not tried to convert design data exported by other mechanical CAD software, and it would be interesting to know if this program can help users of other CAD systems. We tried to make it agnostic to the source of the STEP files, but it does require the possibility to export files with a specified color of the faces (AP214 has this possibility while AP203 does not). Color information is needed anyway as a proxy for materials/finish to distinguish between different parts that have exactly the same geometry; we also use it to hint at the orientation of the parts in the assembly.

    There are multiple ways how the program can be improved, but at least for our project it is already usable. And we hope it is not just for us.

    Technical details

    As soon as we verified that FreeCAD can import our STEP files and it is not that difficult to generate the X3D models we started freecad_x3d project at Github. The x3d_step_assy.py macro runs in FreeCAD and generates X3D files from the STEP input, the rest of the repository is the viewer for the produced models.

    Indexing the STEP part files

    The first thing the program does is scan and index all the STEP models of the parts, saving the information that is needed for matching to the assembly objects. Opening a STEP file in FreeCAD is a very slow process (especially in the GUI mode that is required to have access to the object color information), so this step is needed to significantly speed up subsequent assembly file processing. The part-invariant information, such as the center of gravity (center of volume, to be precise) location, volume, surface area and gyration radii, is provided by FreeCAD. If the part has differently colored faces, the centers for each color are recorded too. Additionally, a list of up to 18 vertex coordinates is calculated and added – these vertices are tested to be inside (or near to) the objects in the assembly. Currently these vertices are selected as the ones having maximal and minimal values for each of the 3 coordinates as well as for their sums and differences.
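    A minimal sketch of that indexing step, assuming FreeCAD's Part Python API (the exact attribute names, e.g. PrincipalProperties and its RadiusOfGyration key, may vary between FreeCAD versions; the actual implementation is the x3d_step_assy.py macro):

    import pickle
    import Part  # available inside FreeCAD's Python console / macro environment

    def index_part(step_path, pickle_path):
        """Save orientation-invariant properties of a STEP part for later matching."""
        shape = Part.read(step_path)
        # Largest solid first: matching only ever uses the first solid of a part
        solids = sorted(shape.Solids, key=lambda s: s.Volume, reverse=True)
        info = []
        for solid in solids:
            props = solid.PrincipalProperties      # inertia axes and gyration radii
            center = solid.CenterOfMass
            info.append({
                "volume": solid.Volume,
                "area": solid.Area,
                "center": (center.x, center.y, center.z),
                "gyration_radii": tuple(props["RadiusOfGyration"]),
            })
        with open(pickle_path, "wb") as f:
            pickle.dump(info, f)
        return info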

    Normally each part model consists of just one solid object, but in practice that is not always the case. The CAD program we use generates an extra “tube” object for each thread, and sometimes we do it intentionally, like making a two-solid photographic UV protection filter as a frame and a glass. This allows us to selectively change the solid/wireframe state when working in the CAD program. The current implementation saves information about each solid in a part and places the largest (currently by volume) solid first (at index 0); the matching uses only the first solid, and that leads to false positives in the reporting of objects that do not have any matches to parts. “False” – because these unmatched objects will still appear in the X3D model, as they are included in the individual part models. Removing such false positive objects from the report is definitely possible, but it was not a big hassle to manually inspect them in the FreeCAD 3D view.

    All this information is recorded in Python pickle format, one file for each STEP file. When the program needs to process an assembly, it first verifies that each STEP part file has a corresponding pickle file and (re)calculates the ones that are either missing or outdated (older than the STEP model).
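    The “missing or outdated” test is just a timestamp comparison, something along these lines (standard library only; the function name is hypothetical):

    import os

    def needs_reindex(step_path, pickle_path):
        """Re-create the pickle cache when it is absent or older than its STEP source."""
        if not os.path.exists(pickle_path):
            return True
        return os.path.getmtime(pickle_path) < os.path.getmtime(step_path)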

    Generation of the X3D files for each part model

    The next step after indexing the STEP models of the parts is to generate the individual parts in X3D format. The program uses the color information that exists after import in GUI mode for each object face and uses it in the generation of the X3D XML data. It wraps each object with an X3D “Group” node to combine the multiple possible objects in a part and to provide bounding box information, and then adds the outermost “Transform” node with zero translation and rotation – it can be used by the viewer program to move and rotate the object. Currently the viewer reads the group bounding box center and moves the top object in the opposite direction for convenient rotation. The imported STEP files may have large offsets of the models from the (0,0,0) point; if this is not corrected, the viewer may try to rotate the object around a point that is far off-screen.

    Similarly to the generation of the pickle files, the program only generates part X3D models if they do not exist or are older than the input STEP files. We noticed that at this stage FreeCAD often segfaults (regardless of the version) and it seems to be related to the GUI. Luckily you only have to load this many files once, and if FreeCAD crashes you may just restart it and the macro will continue the generation of the new files.

    Selection of the parts candidates for the assembly objects

    Opening a complex assembly as a STEP file in FreeCAD can take a while (one of our models took 40 minutes to open), so please be patient. The part matching takes about half that time, so the program offers two options – use the currently active document in FreeCAD, or start from the file path and open it.

    When all the assembly data is available, the program indexes each object, extracting parameters similar to those of the parts – volume, area, inertial properties, and the centers of each color (if present). Then it uses this data to create a list of part candidates for each assembly object, requiring that the orientation-invariant parameters of each object exported as a part of the assembly match (to a configurable precision) those of the same part exported individually. If colors are available, the total area of each color is compared too, but a match is allowed if only the shape is the same, as the CAD program may allow changing the object color in the assembly, making it different from that of the library part. If several parts match the assembly object, then a better color match disqualifies the other shape-only candidates, so it is possible to color-code same-shape parts.
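    Conceptually, the candidate selection is just a tolerance comparison of those invariants; a simplified sketch (the dictionary fields follow the indexing sketch earlier, not the macro's actual data structures):

    def close(a, b, rel_tol=0.01):
        """Relative comparison with a tiny absolute floor for near-zero values."""
        return abs(a - b) <= rel_tol * max(abs(a), abs(b), 1e-9)

    def part_candidates(assembly_obj, part_index, rel_tol=0.01):
        """Return names of library parts whose invariants match an assembly object."""
        matches = []
        for name, solids in part_index.items():
            part = solids[0]                     # matching uses the first (largest) solid
            if (close(assembly_obj["volume"], part["volume"], rel_tol) and
                    close(assembly_obj["area"], part["area"], rel_tol) and
                    all(close(a, b, rel_tol) for a, b in
                        zip(assembly_obj["gyration_radii"], part["gyration_radii"]))):
                matches.append(name)
        return matches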

    Matching of the assembly object to the part orientation

    The next step of the assembly-to-parts decomposition is to determine the part position and orientation that match the assembly objects. In most cases there will be no more than one candidate for each object, but if there are several, the program will try them all and use the first match. It is very easy to find the translation of the part – just use the vector between the already known centers of volume – but it is more tricky to find the correct orientation. There are multiple ways to match orientations, and the program can definitely be improved. We chose a rather simple approach that requires modification of some parts, but that is rather easy as the part models are created by us. The number of parts that required modification is rather small, this modification has to be done once per part (not for each assembly), and the modification does not invalidate the model for CAM usage.

    This approach uses the offsets of the “centers of gravity” of the faces of each color (even a single-colored object may have the center of all faces offset from the center of volume) and then the principal axes of gyration that are provided by FreeCAD. The color offsets are used first, then supplemented by the gyration axes; each step verifies that the vector is non-zero and that the next one is not co-linear with the first. Only two orthogonal vectors are needed; the third one needed for the rotation matrix is calculated as the cross product of the first two. Using the gyration axes has an ambiguity even when all 3 gyration radii are different and thus reliably calculated: they do not provide a sign, only a line of direction. The same asymmetrical object can be oriented in 4 different ways (alternating the sign of two of the 3 axes) and the program tries each of them. Initially I tried to compare the volume of the boolean intersection of the two objects, which should be the same as the volume of a single object if they match, but for some of our STEP models FreeCAD refused to calculate the intersection, so I used the isinside() function instead. It calculates whether a given point is inside the object to a certain precision, so it can be used to verify that all of the vertices saved for the part object, with the transformation matrix applied, end up “inside” the assembly object (actually on the border). Unfortunately even that had an exception – for one of the objects one vertex was returning “False” with any tolerance, even larger than the object size. In that rare case the program tries to move the test point around by a precision-long vector, and that modification worked: FreeCAD returned “True” for the isinside() call.
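    The orientation search then amounts to building an orthonormal frame from two direction vectors (the third axis being their cross product) and trying the four consistent sign combinations; an illustrative numpy sketch, not the macro's code:

    import numpy as np

    def frame(v1, v2):
        """Rotation matrix whose columns are v1, the part of v2 orthogonal to v1, and their cross product."""
        x = v1 / np.linalg.norm(v1)
        y = v2 - np.dot(v2, x) * x       # drop any co-linear component
        y = y / np.linalg.norm(y)
        return np.column_stack((x, y, np.cross(x, y)))

    def candidate_rotations(part_axes, obj_axes):
        """Gyration axes carry no sign, so try the 4 sign choices for the first two axes."""
        p = frame(*part_axes)
        for sx, sy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
            o = frame(sx * obj_axes[0], sy * obj_axes[1])
            yield o @ p.T                # rotation taking the part frame onto the object frame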

    When the color hints are required in the part models

    Using just the principal axes of gyration fails when the object has some symmetry (a point or an axis). Consider a regular socket head screw. Unless it is a really short one, it will have one small and two equal large gyration radii, and the axis for the small gyration radius can be reliably found (it is just the regular axis of the screw), but the two perpendicular ones are arbitrary and may be different for the part and the assembly object. That would leave the hex socket with an incorrect orientation, but usually this hex hole orientation is unimportant. So here we slightly cheated – the test vertices selected for verification with isinside() are some of the outermost ones of the solid (we selected vertices that have maximal/minimal values of each of the coordinates and of the sums/differences of their pairs and of all three) – and the hex hole does not contain any of them. Most of the fasteners we use are such socket head ones; this approach would not work for hex bolts and nuts – they need to have one of the hex faces colored.

    And there are other objects that require some color hints in the part model, like a square plate having no holes or only symmetrical holes, or a turned (round) part with symmetrical holes in it – two of the gyration radii are the same and the corresponding axes cannot be unambiguously determined. You may color one of the side faces of the square plate, or color the inside of a hole, to break the symmetry. If the part does not have individually selectable faces in the CAD program, you may create a small colored cylinder or box, align it with one of the flat faces, boolean-cut it from the object, and then boolean-add it back. The resulting object has the same shape for CAM, but it will have a colored square or circle on one of the faces – sufficient for an unambiguous definition of the orientation.

    Converting multi-level assemblies

    The program can convert multi-level assemblies that contain sub-assemblies, and the NC393F21 design includes such subassembly models. For this model I created proxy single-solid objects in each subassembly (there are 3 used – 0393-07-02, 0393-07-03 and 0393-07-01, which in turn includes three of 0393-07-03), and when exporting the top model to STEP the actual content of the subassembly models was blanked and the proxy objects were visible, so they were exported. The resulting STEP file was placed in a separate directory from the part files, and an optional suffix (‘-ASSY’ by default) was added to the file name before the extension. Each subassembly was exported to STEP twice – once with only the proxy object visible (that file is used to find matches in the higher level of the assembly), saved in the same parts STEP directory, and a second time as an assembly (to a different directory and with the optional suffix) with the proxy object blanked. Conversion of these complex assemblies should be performed bottom-up – first the lower level sub-assemblies, then the ones that use them. The output X3D directory will have both partName.x3d (converted from a proxy object) and partName-ASSY.x3d, which has the actual model of the subassembly. The partName-ASSY.x3d files are not in the index and they do not have source STEP files in the parts directory, so they are not used when matching objects in the assembly. When all the possible objects are matched and the program generates the model X3D file, it replaces inline references to partName.x3d with partName-ASSY.x3d if such files exist in the X3D directory.
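    The final substitution of inline references is a simple existence check along these lines (file layout and ‘-ASSY’ suffix as described above; the function name is mine):

    import os

    def inline_reference(x3d_dir, part_name, suffix="-ASSY"):
        """Point the Inline node at partName-ASSY.x3d when a real subassembly model exists."""
        assy = part_name + suffix + ".x3d"
        if os.path.exists(os.path.join(x3d_dir, assy)):
            return assy
        return part_name + ".x3d"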

    by Andrey Filippov at December 21, 2015 10:09 AM

    ZeptoBARS

    Dallas Semiconductor DS1000Z : weekend die-shot

    Dallas Semiconductor DS1000Z - 5 tap delay line.
    Die size 2074x1768 µm.


    December 21, 2015 09:48 AM

    Altus Metrum

    TeleLaunchTwo

    TeleLaunchTwo — A Smaller Wireless Launch Controller

    I've built a wireless launch control system for NAR and OROC. Those are both complex systems with a single controller capable of running hundreds of pads. And they're also complicated to build, with each board hand-made by elves in our Portland facility (aka, my office).

    A bunch of people have asked for something simpler, but using the same AES-secured two-way wireless communications link, so I decided to just build something and see if we couldn't eventually come up with something useful. I think if there's enough interest, I can get some boards built for reasonable money.

    Here's a picture of the system; you can see the LCO end in a box behind the pad end sitting on the bench.

    Radio Link

    Each end has a 35mW 70cm digital transceiver (so, they run in the 440MHz amateur band). These run at 19200 baud with fancy forward error correction and AES security to keep the link from accidentally (or maliciously) firing a rocket at the wrong time. Using a bi-directional link, we also get igniter continuity and remote arming information at the LCO end.

    The LCO Box

    In the LCO box, there's a lipo battery to run the device, so it can be completely stand-alone. It has three switches and a button -- an arming switch for each of two channels, a power switch and a firing button. The lipo can be charged by opening up the box and plugging it into a USB port.

    The Pad Box

    The pad box will have some cable glands for the battery and each firing circuit. On top, it will have two switches, a power switch and an arming switch. The board has two high-power FETs to drive the igniters. That should be more reliable than using a relay, while also allowing the board to tolerate a wider range of voltages -- the pad box can run on anything from 12V to 24V.

    The Box

    Unlike the OROC and NAR systems, these boards are both designed to fit inside a specific box, the Hammond 1554E, and use the mounting standoffs provided. This box is rated at NEMA 4X, which means it's fairly weather proof. Of course, I have to cut holes in the box, but I found some NEMA 4X switches, will use cable glands for the pad box wiring and can use silicone around the BNC connector. The result should be pretty robust. I also found a pretty solid-seeming BNC connector, which hooks around the edge of the board and also clips on to the board.

    Safety Features

    There's an arming switch on both ends of the link, and you can't fire a rocket without having both ends armed. That provides an extra measure of safety while working near the pad. The pad switch is a physical interlock between the power supply and the igniters, so even if the software is hacked or broken, disarming the box means the igniters won't fire.

    The LCO box beeps constantly when either arming switch is selected, giving you feedback that the system is ready to fire. And you can see on any LED whether the pad box is also armed.

    by keithp's rocket blog at December 21, 2015 03:51 AM

    December 05, 2015

    Harald Welte

    Volunteer for Openmoko.org USB Product ID maintenance

    Back when Openmoko took the fall, we donated the Openmoko, Inc. USB Vendor ID to the community and started the registry of free Product ID allocations at http://wiki.openmoko.org/wiki/USB_Product_IDs

    Given my many other involvements and constant overload, I've been doing a poor job at maintaining it, i.e. handling incoming requests.

    So I'm looking for somebody who can reliably take care of it, including

    • reviewing if the project fulfills the criteria (hardware or software already released under FOSS license)
    • entering new allocations to the wiki
    • informing applicants of their allocation

    The amount of work is actually not that much (like one mail per week), but it needs somebody to reliably respond to the requests in a shorter time frame than I can currently do.

    Please let me know if you'd like to volunteer.

    by Harald Welte at December 05, 2015 11:00 PM

    Anyone interested in supporting SMPP interworking at 32C3?

    Sylvain brought this up yesterday: Wouldn't it be nice to have some degree of SMS interfacing from OpenBSC/OsmoNITB to the real world at 32C3? It is something that we've never tried so far, and thus definitely worthy of testing.

    Of course, full interworking is not possible without assigning public MSISDNs to all internal subscribers / 'extensions', as we call them.

    But what would most certainly work is to have at least outbound SMS working by means of an external SMPP interface.

    The OsmoNITB-internal SMSC already speaks SMPP (in the SMSC role), so we would need to implement a small amount of glue logic that behaves as an ESME (External Short Message Entity) towards both OsmoNITB and some public SMS operator/reseller that speaks SMPP as well.

    Now of course, sending SMS to public operators doesn't come for free. So in case anyone reading this has access to SMPP at public operators, resellers, SMS hubs, it would be interesting to see if there is a chance for some funding/sponsoring of that experiment.

    Feel free to contact me if you see a way to make this happen.

    by Harald Welte at December 05, 2015 11:00 PM

    December 04, 2015

    Harald Welte

    python-libsmpp works great with OsmoNITB

    Since 2012 we have support for SMPP in OsmoNITB (the network-in-the-box version of OpenBSC). So far I've only used it from C and Erlang code.

    Yesterday I gave python-smpplib from https://github.com/podshumok/python-smpplib a try and it worked like a charm. Of course one has to get the details right (like numbering plan indication).

    In case anyone is interested in interfacing OsmoNITB SMPP from python, I've put a working example to send SMS at http://cgit.osmocom.org/mncc-python/tree/smpp_test.py
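    For a flavour of what such an example looks like, here is a rough sketch from my recollection of the python-smpplib README; the host, port, credentials and MSISDNs are placeholders, and keyword/constant names may differ between library versions, so treat the linked smpp_test.py as the authoritative reference:

    import smpplib.client
    import smpplib.consts
    import smpplib.gsm

    # Bind to the OsmoNITB-internal SMSC as an ESME (placeholder host/port/credentials)
    client = smpplib.client.Client('127.0.0.1', 2775)
    client.connect()
    client.bind_transceiver(system_id='OSMO-SMPP', password='dummy')

    # Getting the numbering plan indication right matters, hence the explicit TON/NPI
    parts, data_coding, esm_class = smpplib.gsm.make_parts(u'Hello from OsmoNITB')
    for part in parts:
        client.send_message(
            source_addr_ton=smpplib.consts.SMPP_TON_INTL,
            source_addr_npi=smpplib.consts.SMPP_NPI_ISDN,
            source_addr='12345',
            dest_addr_ton=smpplib.consts.SMPP_TON_INTL,
            dest_addr_npi=smpplib.consts.SMPP_NPI_ISDN,
            destination_addr='67890',
            short_message=part,
            data_coding=data_coding,
            esm_class=esm_class,
        )

    client.unbind()
    client.disconnect()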

    by Harald Welte at December 04, 2015 11:00 PM

    December 01, 2015

    Harald Welte

    Python tool to talk to OsmoNITB MNCC interface

    I've been working on a small python tool that can be used to attach to the MNCC interface of OsmoNITB. It implements the 04.08 CC state machine with our MNCC primitives, including support for RTP bridge mode of the voice streams.

    The immediate first use case for this was to be able to generate MT calls to a set of known MSISDNs and load all 14 TCH/H channels of a single-TRX BTS. It will connect the MT calls in pairs, so you end up with 7 MS-to-MS calls.

    The first working version of the tool is available from

    The code is pretty hacky in some places. That's partially due to the fact that I'm much more familiar with the C, Perl and Erlang worlds than with Python. Still, I thought it was a good idea to do it in Python to enable more people to use/edit/contribute to it.

    I'm happy for review / cleanup suggestion by people with more Python-foo than I have.

    Architecturally, I decided to do things a bit Erlang-like, where we have finite state machines in an actor model, and message passing between the actors. This is what happens with the GsmCallFsm()'s, which are created by the GsmCallConnector() representing both legs of a call, and the MnccActor() that wraps the MNCC socket towards OsmoNITB.
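    As a toy illustration of that structure (not the tool's actual classes, just the actor-with-a-mailbox idea, using only the standard library):

    import queue
    import threading

    class Actor(threading.Thread):
        """A tiny actor: a thread that drains its own mailbox and dispatches messages."""
        def __init__(self):
            super().__init__(daemon=True)
            self.mailbox = queue.Queue()

        def send(self, msg):
            self.mailbox.put(msg)

        def run(self):
            while True:
                msg = self.mailbox.get()
                if msg is None:          # poison pill terminates the actor
                    break
                self.handle(msg)

    class CallFsm(Actor):
        """A minimal call state machine reacting to MNCC-like events."""
        def __init__(self):
            super().__init__()
            self.state = 'NULL'

        def handle(self, msg):
            if self.state == 'NULL' and msg == 'SETUP':
                self.state = 'CALL_PRESENT'
            elif self.state == 'CALL_PRESENT' and msg == 'CONNECT':
                self.state = 'ACTIVE'
            print('state is now', self.state)

    fsm = CallFsm()
    fsm.start()
    fsm.send('SETUP')
    fsm.send('CONNECT')
    fsm.send(None)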

    The actual encoding/decoding of MNCC messages is auto-generated from the mncc header file #defines, enums and c-structures by means of ctypes code generation.
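    The ctypes approach means every MNCC C struct gets a mirror class whose binary layout matches the header; a hand-written illustration of the pattern (the struct and field names here are made up, the real definitions are generated from the mncc header):

    import ctypes

    class MnccHello(ctypes.Structure):
        """Illustrative stand-in for a generated MNCC message structure."""
        _fields_ = [
            ("msg_type", ctypes.c_uint32),
            ("version",  ctypes.c_uint32),
        ]

    def decode(buf):
        """Map bytes read from the MNCC socket directly onto the structure."""
        return MnccHello.from_buffer_copy(buf)

    def encode(msg_type, version):
        """Serialize a message for writing back to the socket."""
        return bytes(MnccHello(msg_type=msg_type, version=version))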

    mncc_test.py currently drops you into a python shell where you can e.g. start more / new calls by calling functions like connect_call("7839", "3802") from that shell. Exiting the shell with quit() or Ctrl+C will terminate all call FSMs and then exit.

    by Harald Welte at December 01, 2015 11:00 PM

    November 30, 2015

    Free Electrons

    UN climate conference: switching to “green” electricity

    Wind turbines in Denmark

    The United Nations 2015 Climate Change Conference is an opportunity for everyone to think about contributing to the transition to renewable and sustainable energy sources.

    One way to do that is to buy electricity that is produced from renewable resources (solar, wind, hydro, biomass…). With the worldwide opening of the energy markets, this should now be possible in most parts of the world.

    So, with an electricity consumption between 4,000 and 5,000 kWh per year, we decided to make the switch for our main office in Orange, France. But how do you choose a good supplier?

    Greenpeace turned out to be a very good source of information about this topic, comparing the offerings from various suppliers, and finding out which ones really make serious investments in renewable energy sources.

    Here are the countries for which we have found Greenpeace rankings:
    • Australia
    • France

    If you find a similar report for your country, please let us know, and we will add it to this list.

    Back to our case, we chose Enercoop, a French cooperative company only producing renewable energy. This supplier has by far the best ranking from Greenpeace, and stands out from more traditional suppliers which too often are just trading green certificates, charging consumers a premium rate without investing by themselves in green energy production.

    The process to switch to a green electricity supplier was very straightforward. All we needed was an electricity bill and 15 minutes of time, whether you are an individual or represent a company. From now on, Enercoop will guarantee that for every kWh we consume from the power grid, they will inject the same amount of energy into the grid from renewable sources. There is no risk of more power outages than before, as the national company operating and maintaining the grid stays the same.

    It’s true our electricity is going to cost about 20% more than nuclear electricity, but at least what we spend is going to support local investments in renewable energy sources that don’t degrade the fragile environment that keeps us alive.

    Your comments and own tips are welcome!

    by Michael Opdenacker at November 30, 2015 10:37 AM

    November 28, 2015

    Bunnie Studios

    Products over Patents

    NPR’s Audrey Quinn from Planet Money explores IP in the age of rapid manufacturing by investigating the two-wheel self balancing scooter. When patent paperwork takes more time and resources than product production, more agile systems of idea sharing evolve to keep up with the new pace of innovation.

    If the embedded audio player above isn’t working, try this link. Seems like the embed isn’t working outside the US…

    by bunnie at November 28, 2015 11:20 PM

    MLTalk with Joi Ito, Nadya Peek and me

    I gave an MLTalk at the MIT Media Lab this week, where I disclose a bit more about the genesis of the Orchard platform used to build, among other things, the Burning Man sexually generated light pattern badge I wrote about a couple months back.

    The short provocation is followed up by a conversation with Joi Ito, the Director of the Media Lab, and Nadya Peek, a renowned expert in digital fabrication from the CBA (and incidentally, the namesake of the Peek Array in the Novena laptop) about supply chains, digital fabrication, trustability, and things we’d like to see in the future of low volume manufacturing.

    I figured I’d throw a link here on the blog to break the monotony of name that wares. Sorry for the lack of new posts, but I’ve been working on a couple of books and magazine articles in the past months (some of which have made it to print: IEEE Spectrum, Wired) which have consumed most of my capacity for creative writing.

    by bunnie at November 28, 2015 12:50 AM

    Name that Ware November 2015

    This month’s ware is shown below:

    And below are views of the TO-220 devices which are folded over in the top-down photo:

    We continue this month with the campaign to get Nava Whiteford permission to buy a SEM. Thanks again to Nava for providing another interesting ware!

    by bunnie at November 28, 2015 12:22 AM

    Winner, Name that Ware October 2015

    The ware for October 2015 was a Lecroy LT342L. Nava notes that it was actually manufactured by Iwatsu, but the ASICs on the inside all say LeCroy. Congrats to Carl Smith for nailing it, email me for your prize and happy Thanksgiving!

    by bunnie at November 28, 2015 12:22 AM

    November 19, 2015

    Geoffrey L. Barrows - DIY Drones

    360 degree stereo vision and obstacle avoidance on a Crazyflie nano quadrotor

    (More info and full post here)

    I've been experimenting with putting 360 degree vision, including stereo vision, onto a Crazyflie nano quadrotor to assist with flight in near-Earth and indoor environments. Four stereo boards, each holding two image sensor chips and lenses, together see in all directions except up and down. We developed the image sensor chips and lenses in-house for this work, since there is nothing available elsewhere that is suitable for platforms of this size. The control processor (on the square PCB in the middle) uses optical flow for position control and stereo vision for obstacle avoidance. The system uses a "supervised autonomy" control scheme in which the operator gives high level commands via control sticks (e.g. "move this general direction") and the control system implements the maneuver while avoiding nearby obstacles. All sensing and processing is performed on board. The Crazyflie itself was unmodified other than a few lines of code in its firmware to get the target Euler angles and throttle from the vision system.

    Below is a video from a few flights in an indoor space. This is best viewed on a laptop or desktop computer to see the annotations in the video. The performance is not perfect, but much better than the pure "hover in place" systems I had flown in the past since obstacles are now avoided. I would not have been able to fly in the last room without the vision system to assist me! There are still obvious shortcomings – for example, the stereo vision currently does not respond to blank walls – but we'll address this soon...

    by Geoffrey L. Barrows at November 19, 2015 11:28 PM

    November 15, 2015

    Harald Welte

    GSM test network at 32C3, after all

    Contrary to my blog post yesterday, it looks like we will have a private GSM network at the CCC congress again, after all.

    It appears that Vodafone Germany (who was awarded the former DECT guard band in the 2015 spectrum auctions) is not yet using it in December, and they agreed that we can use it at the 32C3.

    With this approval from Vodafone Germany we can now go to the regulator (BNetzA) and obtain the usual test license. Given that we used to get the license in the past, and that Vodafone has agreed, this should be a mere formality.

    For the German language readers who appreciate the language of the administration, it will be a Frequenzzuteilung für Versuchszwecke im nichtöffentlichen mobilen Landfunk.

    So thanks to Vodafone Germany, who enabled us at least this time to run a network again. By end of 2016 you can be sure they will have put their new spectrum to use, so I'm not that optimistic that this would be possible again.

    by Harald Welte at November 15, 2015 11:00 PM

    November 14, 2015

    Harald Welte

    No GSM test network at 32C3

    I currently don't assume that there will be a GSM network at the 32C3.

    Ever since OpenBSC was created in 2008, the annual CCC congress was a great opportunity to test OpenBSC and related software with thousands of willing participants. In order to do so, we obtained a test licence from the German regulatory authority. This was never any problem, as there was a chunk of spectrum in the 1800 MHz GSM band that was not allocated to any commercial operator, the so-called DECT guard band. It's called that way as it was kept free in order to ensure there is no interference between 1800 MHz GSM and the neighboring DECT cordless telephones.

    Over the decades, it was determined on an EU level that this guard band might not be necessary, or at least not if certain considerations are taken for BTSs deployed in that band.

    When the German regulatory authority re-auctioned the GSM spectrum earlier this year, they decided to also auction the frequencies of the former DECT guard band. The DECT guard band was awarded to Vodafone.

    This is a pity, as it means that people involved with cellular research or development of cellular technology now have a significantly harder time actually testing their systems.

    In some other EU member states it is easier, like in the Netherlands or the UK, where the DECT guard band was not treated like any other chunk of the GSM bands, but put under special rules. Not so in Germany.

    To make a long story short: Without the explicit permission of any of the commercial mobile operators, it is not possible to run a test/experimental network like we used to run at the annual CCC congress.

    Given that

    • the event is held in the city center (where frequencies are typically used and re-used quite densely), and
    • an operator has nothing to gain from permitting us to test our open source GSM/GPRS implementations,

    I think there is little chance that this will become a reality.

    If anyone has really good contacts to the radio network planning team of a German mobile operator and wants to prove me wrong: Feel free to contact me by e-mail.

    Thanks to everyone involved with the GSM team at the CCC events, particularly Holger Freyther, Daniel Willmann, Stefan Schmidt, Jan Luebbe, Peter Stuge, Sylvain Munaut, Kevin Redon, Andreas Eversberg, Ulli (and everyone else whom I may have forgotten, my apologies). It's been a pleasure!

    Thanks also to our friends at the POC (Phone Operation Center) who have provided interfacing to the DECT, ISDN, analog and VoIP network at the events. Thanks to roh for helping with our special patch requests. Thanks also to those entities and people who borrowed equipment (like BTSs) in the pre-sysmocom years.

    So long, and thanks for all the fish!

    by Harald Welte at November 14, 2015 11:00 PM

    November 12, 2015

    Elphel

    NC393 progress update: 14MPix Sensor Front End is up and running

    10398 Sensor Front End with 14MPix MT9F002


    Sensors (ON Semiconductor MT9F002) and blank PCBs arrived in time and so I was able to hand-assemble two 10398 boards and start testing them. I had some minor problems getting data output from the first board, but it turned out to be just my bad soldering of the sensor; the second board worked immediately. To my surprise I did not have any problems with the HiSPi decoder that I had simulated using a sensor model I wrote myself from the documentation, so the color bar test pattern appeared almost immediately, followed by real acquired images. I kept most of the sensor settings unmodified from the default values, just selected the correct PLL multiplier, output signal levels (1.8V HiVCM – compatible with the FPGA) and packetized format; the only other registers I had to adjust manually were exposure and the color analog gains.

    As was reasonable to expect, the sensitivity of the 14MPix sensor is lower than that of the 5MPix MT9P006 – our initial estimate is that it is 4 times lower, but this needs more careful measurements to find the exposure required for pixel saturation with the same illumination. We set the analog channel gains for both sensors slightly higher than the minimum needed for saturation, but such rough measurements could easily miss a factor of 1.5. The MT9F002 offers more control over the signal chain gains, but any (even analog) gain in the chain that boosts the signal above the minimum needed for saturation proportionally reduces the used “well capacity”, while I expect the Full Well Capacity (FWC) is already not very high for a 1.4μm × 1.4μm pixel sensor. And a decrease in the number of electrons stored in a pixel accordingly increases the relative shot noise that reveals itself in the highlight areas. We will need to accurately measure the FWC of the MT9F002 and make a better sensitivity comparison, including that of the binned mode, but I expect to find out that the 5MPix sensors are not obsolete yet and for some applications may still have advantages over the newer sensors.
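    To make the shot noise argument concrete, here is a small back-of-the-envelope sketch (the well capacities are made-up numbers; only the 1/sqrt(N) relation matters):

    from math import sqrt

    def relative_shot_noise(electrons):
        # Photon shot noise is sqrt(N) electrons, so the noise relative to the
        # signal at a given level is 1/sqrt(N).
        return 1.0 / sqrt(electrons)

    # Made-up full well capacities, just to illustrate the scaling: halving the
    # usable well capacity raises the relative highlight noise by sqrt(2).
    for fwc in (20000, 10000, 5000):
        print("FWC %5d e-: relative shot noise in highlights %.2f%%"
              % (fwc, 100 * relative_shot_noise(fwc)))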

    Image acquired with 5 MPix MT9P006 sensor, 1/2000 s


    Image acquired with 14MPix MT9F002 sensor, 1/500 s


    Both sensors used identical f=4.5mm F3.0 lenses; the 5MPix one’s lens is precisely adjusted during calibration, while the lens of the 14MPix sensor is just attached and focused by hand using the lens thread, with no tilt correction performed. Both images were saved at 100% JPEG quality (virtually lossless) to eliminate compression artifacts, and both used the simple in-camera 3×3 demosaic algorithm. The 14MPix image has a visible checkerboard pattern caused by the difference between the two green values (green in the red row and green in the blue row). I’ll check that it is not caused by some FPGA code bug I might have introduced (by saving a raw image and de-bayering on a host computer), but it may also be caused by pixel cross-talk in the sensor. In any case it is possible to compensate for it, or at least significantly reduce it, in the output data.

    MT9F002 transmits data over 5 differential 100Ω pairs: 1 clock pair and 4 data lanes. For the initial tests I used our regular 70mm flex cable intended for the parallel interface sensors, and just soldered five 100Ω resistors to the contacts at the camera side end. It did work and I did not even have to do any timing adjustments of the differential lanes. We’ll do such adjustments in the future to get to the centers of the data windows – both the sensor and the FPGA code have provisions for that. The physical 100Ω load resistors were needed because it turned out that Xilinx Zynq has on-chip differential termination only for 2.5V (or higher) supply voltages on the regular (not “high performance”) I/Os, and this application uses 1.8V interface power – I missed this part of the documentation and assumed that all the differential inputs could turn on differential termination. The 660 Mbps/lane data rate is not too high and I expect that it will be possible to use short cables with no load resistors at all; adding such resistors to the 10393 board is not an option as it has to work with both serial and parallel sensor interfaces. Simultaneously we designed and placed an order for dedicated 150mm flex cables; if that works out we’ll try longer (450mm) controlled-impedance cables.

    by andrey at November 12, 2015 08:43 PM

    November 10, 2015

    ZeptoBARS

    Infineon BFR740 - 42GHz BJT : weekend die-shot

    Infineon BFR740L3RH - bipolar SiGe RF transistor with a transition frequency of 42GHz in a very small leadless package (TSLP-3-9 - 0.6×1×0.31mm).
    Die size 305x265 µm.



    After metal etch we can see that it's not that simple:


    Main active area (scale 1px = 57nm):



    November 10, 2015 05:18 AM

    November 07, 2015

    Harald Welte

    Progress on the Linux kernel GTP code

    It is always sad if you start to develop some project and then never get around to finishing it, as there are too many things to take care of in parallel. But then, days only have 24 hours...

    Back in 2012 I started to write some generic Linux kernel GTP tunneling code. GTP is the GPRS Tunneling Protocol, a protocol between core network elements in GPRS networks, later extended to be used in UMTS and even LTE networks.

    GTP is split into a control plane for management and a user plane carrying the actual user IP traffic of a mobile subscriber. So if you're reading this blog via a cellular internet connection, your data is carried in GTP-U within the cellular core network.

    To me as a former Linux kernel networking developer, the user plane of GTP (GTP-U) had always belonged in kernel space. It is a tunneling protocol not too different from many other tunneling protocols that already exist (GRE, IPIP, L2TP, PPP, ...) and for the user plane, all it does is basically add a header in one direction and remove the header in the other direction. Keeping that in the kernel avoids pushing every packet of user data to userspace and back, which matters particularly in networks with many subscribers and/or high bandwidth use.
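    To illustrate just how thin that user plane handling is, here is a sketch of the encapsulation for a basic GTPv1-U G-PDU without any of the optional sequence/extension fields (an illustration of the header layout only, not the kernel code):

    import struct

    GTP_U_PORT = 2152      # registered UDP port for GTP-U
    GTP_MSG_GPDU = 0xFF    # message type: G-PDU, i.e. encapsulated user data

    def gtpu_encap(teid, inner_packet):
        # Flags 0x30: version 1, protocol type GTP, no optional fields present.
        header = struct.pack("!BBHI", 0x30, GTP_MSG_GPDU, len(inner_packet), teid)
        return header + inner_packet

    def gtpu_decap(frame):
        flags, msg_type, length, teid = struct.unpack("!BBHI", frame[:8])
        assert msg_type == GTP_MSG_GPDU
        return teid, frame[8:8 + length]

    # Round-trip a stand-in payload through encapsulation and decapsulation.
    payload = b"\x45" + b"\x00" * 19
    teid, inner = gtpu_decap(gtpu_encap(teid=0x1234, inner_packet=payload))
    assert inner == payload and teid == 0x1234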

    Also, unlike many other telecom / cellular protocols, GTP is an IP-only protocol with no E1, Frame Relay or ATM legacy. It also has nothing to do with SS7, nor does it use ASN.1 syntax and/or some exotic encoding rules. In summary, it is nothing like any other GSM/3GPP protocol, and looks much more like what you're used to from the IETF/Internet world.

    Unfortunately I didn't get very far with my code back in 2012, but luckily Pablo Neira (one of my colleagues from netfilter/iptables days) picked it up and carried it forward. However, it stalled for some time, until it was recently picked up by Andreas Schultz; it now receives some attention and discussion, with the clear intention to finish it and submit it for mainline inclusion.

    The code is now kept in a git repository at http://git.osmocom.org/osmo-gtp-kernel/

    Thanks to Pablo and Andreas for picking this up, let's hope this is the last coding sprint before it goes mainline and gets actually used in production.

    by Harald Welte at November 07, 2015 11:00 PM

    Osmocom Berlin meetings

    Back in 2012, I started the idea of having a regular, bi-weekly meeting of people interested in mobile communications technology, not only strictly related to the Osmocom projects and software. This was initially called the Osmocom User Group Berlin. The meetings were held twice per month in the rooms of the Chaos Computer Club Berlin.

    There are plenty of people that were or still are involved with Osmocom one way or another in Berlin. Think of zecke, alphaone, 2b-as, kevin, nion, max, prom, dexter, myself - just to name a few.

    Over the years, I got "too busy" and was no longer able to attend regularly. Some people kept it alive (thanks to dexter!), but eventually the meetings were discontinued in 2013.

    In October 2015 I started a revival of the meetings; two have been held already, and the third is coming up next week on November 11.

    I'm happy that I had the idea of re-starting the meeting. It's good to meet old friends and new people alike. Both times there actually were some new faces around, most of which even had a classic professional telecom background.

    In order to emphasize that the focus is not strictly on Osmocom alone (and particularly not only on its users), I decided to rename the event to the Osmocom Meeting Berlin.

    If you're in Berlin and are interested in mobile communications technology on the protocol and radio side of things, feel free to join us next Wednesday.

    by Harald Welte at November 07, 2015 11:00 PM

    November 04, 2015

    Elphel

    NC393 progress update: one gigapixel per second (12x faster than NC353)

    All the PCBs for the new camera – 10393, 10389 and 10385 – are modified to rev “A”; we have already received the new boards from the factory and are now waiting for the first production batch to be built. The PCB changes are minor, just moving connectors away from the board edge to simplify the mechanical design and improve thermal contact of the heat sink plate to the camera body. Additionally the 10389A got an M.2 connector instead of mSATA to accommodate modern SSDs.

    While waiting for the production we designed a new sensor board (10398) that has exactly the same dimensions and image sensor format as the current 10338E, so it is compatible with the hardware for the calibrated sensor front ends we use in photogrammetric cameras. The difference is that the MT9F002 is a 14MPix device and has a high-speed serial interface instead of the legacy parallel one. We expect to get the new boards and the sensors next week and will immediately start working with this new hardware.

    In preparation for the faster sensors I started to work on the FPGA code to make it ready for the new devices. We planned to use modern sensors with serial interfaces from the very beginning of the new camera design, so the hardware accommodates up to 8 differential data lanes plus a clock pair in addition to the I²C and several control signals. One obviously required part is support for the Aptina HiSPi (High Speed Serial Pixel) interface, which in the case of the MT9F002 uses 4 differential data lanes, each running at 660 Mbps – in 12-bit mode that corresponds to 220 MPix/s. Until we get the actual sensors I can only simulate reception of the HiSPi data using a sensor model we wrote ourselves following the interface documentation. I still need to make sure I understood the documentation correctly and that the sensor will produce output similar to what we modeled.

    The sensor interface is not the only piece of the code that needed changes; I also had to significantly increase the bandwidth of the FPGA signal processing and modify the I²C sequencer to support 2-byte register addresses.

    Data that the FPGA receives from the sensor passes through several clock domains until it is stored in the system memory as a sequence of compressed JPEG/JP4 frames:

    • Sensor data in each channel enters the FPGA at the pixel clock rate, and subsequently passes through the vignetting correction/scaling module, the gamma conversion module and the histogram calculation modules. The output of this chain is buffered before crossing to the memory clock domain.
    • The multichannel DDR3 memory controller records sensor data in line-scan order and later retrieves it in overlapping (for JPEG) or non-overlapping (for JP4) square tiles.
    • Data tiles retrieved from the external DDR3 memory are sent to the compressor clock domain to be processed with the JPEG algorithm. In color JPEG mode the compressor bandwidth has to be 1.5 times higher than the pixel rate, as for 4:2:0 encoding each 16×16 pixel macroblock generates six 8×8 image blocks – 4 for Y (intensity) and 2 for the color components. In JP4 mode, when the de-mosaic algorithm runs on the host computer, the compressor clock rate equals the pixel rate.
    • The last clock domain is the 150MHz one used by the AXI interface, which operates in 64-bit parallel mode and transfers the compressed data to the system memory.

    Two of these domains used a double clock rate for some of the processing stages – histogram calculation in the pixel clock domain and the Huffman encoder/bit stuffer in the compressor. In the previous NC353 camera the pixel clock rate was 96MHz (192MHz for double rate) and the compressor rate was 80MHz (160MHz for double rate). The difference between the sensor and compressor clock rates reflects the fact that the sensor data output is not uniform (it pauses during inactive lines) while the compressor can process the frame at a steady rate.

    The MT9F002 image sensor has an output pixel rate of 220MPix/s with an average (over the full frame) rate of 198MPix/s. Using double rate clocks (440MHz for the sensor channel and 400MHz for the compressor) would be rather difficult on Zynq, so I first needed to eliminate such clocks from the design. It was possible to implement and test this modification with the existing sensor, and now it is done – four of the camera compressors each run at 250MHz (even on “-1”, or “slow”, speed grade silicon), making a total of 1GPix/s. This does not require 4 separate sensors running simultaneously – a single high speed imager can provide data for all 4 compressors, each processing every 4th frame, as each image is processed independently.

    At this time the memory controller will be a bottleneck when running all four MT9F002 sensors simultaneously, as it currently provides only 1600MB/s of bandwidth, which may be marginally sufficient for four MT9F002 sensor channels and 4 compressor channels each requiring 200MB/s (the bandwidth overhead is just a few percent). I am sure it will be possible to optimize the memory controller code to run at a higher rate to match the compressors. We have already identified which parts of the memory controller need to be modified to support a 1.5x clock increase, to a total of 2400MB/s. And as the production NC393 camera will have a higher speed grade SoC, there will be an extra 20% performance increase for the same code. That will provide bandwidth sufficient not just to run 4 sensors at full speed and compress the output data, but also to do some other image manipulation at the same time.
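    As a sanity check, the budget described above can be reproduced with simple arithmetic (a sketch using only the figures quoted in this post):

    sensors          = 4
    compressor_clock = 250e6    # pixels/s per compressor channel
    mem_bandwidth    = 1600e6   # bytes/s provided by the current memory controller
    per_channel      = 200e6    # bytes/s per sensor or compressor channel

    total_compressor_rate = sensors * compressor_clock
    print("total compressor rate: %.1f GPix/s" % (total_compressor_rate / 1e9))

    # Each sensor channel writes to memory and each compressor channel reads back.
    needed = (sensors + sensors) * per_channel
    print("memory bandwidth needed: %.0f of %.0f MB/s available"
          % (needed / 1e6, mem_bandwidth / 1e6))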

    Compared to the previous Elphel NC353 camera, the new NC393 prototype has already been tested to have 12x higher compressor bandwidth (4 channels instead of one, and 250MPix/s instead of 80MPix/s); we plan to have results from the actual sensor with the full data processing chain soon.

    by andrey at November 04, 2015 06:41 AM

    November 03, 2015

    Free Electrons

    Linux 4.3 released, Free Electrons contributions inside

    The 4.3 kernel has been released just a few days ago. For details about the big new features in this release, we as usual recommend reading the LWN.net articles covering the merge window: part 1, part 2 and part 3.

    According to the KPS statistics, there were 12128 commits in this release, and with 110 patches, Free Electrons is the 20th contributing company. As usual, we made some contributions to this release, though a somewhat smaller number than for previous releases.

    Our main contributions this time around:

    • On the support for Atmel ARM SoCs
      • Alexandre Belloni contributed a fairly significant number of cleanups: description of the slow clock in the Device Tree, removal of left-overs from platform-data usage in device drivers (no longer needed now that all Atmel ARM platforms use the Device Tree).
      • Boris Brezillon contributed numerous improvements to the atmel-hlcdc, which is the DRM/KMS driver for the modern Atmel ARM SoCs. He added support for several SoCs to the driver (SAMA5D2, SAMA5D4, SAM9x5 and SAM9n12), added PRIME support, and support for the RGB565 and RGB444 output configurations.
      • Maxime Ripard improved the dmaengine drivers for Atmel ARM SoCs (at_hdmac and at_xdmac) to add memset and scatter-gather memset capabilities.
    • On the support for Allwinner ARM SoCs
      • Maxime Ripard converted the SID driver to the newly introduced nvmem framework. Maxime also did some minor pin-muxing and clock related updates.
      • Boris Brezillon fixed some issues in the NAND controller driver.
    • On the support for Marvell EBU ARM SoCs
      • Thomas Petazzoni added the initial support for suspend to RAM on Armada 38x platforms. The support is not fully enabled yet due to remaining stability issues, but most of the code is in place. Thomas also did some minor updates/fixes to the XOR and crypto drivers.
      • Grégory Clement added the initial support for standby, a mode that allows forcefully putting the CPUs into deep-idle mode. For now, it is not different from what cpuidle provides, but in the future, we will progressively enable this mode to shut down PHY and SERDES lanes to save more power.
    • On the RTC subsystem, Alexandre Belloni did numerous fixes and cleanups to the rx8025 driver, and also a few to the at91sam9 and at91rm9200 drivers.
    • On the common clock framework, Boris Brezillon contributed a change to the ->determine_rate() operation to fix overflow issues.
    • On the PWM subsystem, Boris Brezillon contributed a number of small improvements/cleanups to the subsystem and some drivers: addition of a pwm_is_enabled() helper, migrate drivers to use the existing helper functions when possible, etc.

    The detailed list of our contributions is:

    by Thomas Petazzoni at November 03, 2015 03:11 PM

    November 02, 2015

    Harald Welte

    Germany's excessive additional requirements for VAT-free intra-EU shipments

    Background

    At my company sysmocom we are operating a small web-shop providing small tools and accessories for people interested in mobile research. This includes programmable SIM cards, SIM card protocol tracers, adapter cables, duplexers for cellular systems, GPS disciplined clock units, and other things we consider useful to people in and around the various Osmocom projects.

    We of course ship domestic, inside the EU and world-wide. And that's where the trouble starts, at least since 2014.

    What are VAT-free intra-EU shipments?

    As many readers of this blog (at least the European ones) know, inside the EU there is a system by which intra-EU sales between businesses in EU member countries are performed without charging VAT.

    This is the result of different countries having different amount of VAT, and the fact that a business can always deduct the VAT it spends on its purchases from the VAT it has to charge on its sales. In order to avoid having to file VAT return statements in each of the countries of your suppliers, the suppliers simply ship their goods without charging VAT in the first place.

    In order to have checks and balances, both the supplier and the recipient have to file declarations to their tax authorities, indicating the sales volume and the EU VAT ID of the respective business partners.

    So far so good. This concept was reasonably simple to implement and it makes the life easier for all involved businesses, so everyone participates in this scheme.

    Of course there always have been some obstacles, particularly here in Germany. For example, you are legally required to confirm the EU-VAT-ID of the buyer before issuing a VAT-free invoice. This confirmation request can be done online.

    However, the German tax authorities invented something unbelievable: a Web-API for confirmation of EU-VAT-IDs that has opening hours. Despite this having rightfully been at the center of ridicule by the German internet community for many years, it still remains in place. So there are certain times of the day during which you cannot verify EU-VAT-IDs, and thus cannot sell products VAT-free ;)

    But even that, one has gotten used to living with.

    Gelangensbescheinigung

    Now in recent years (since January 1st, 2014), the German authorities came up with the concept of the Gelangensbescheinigung. To the German reader, this newly invented word already sounds ugly enough. A literal translation is difficult, as it sounds really clumsy. Think of something like a reaching-its-destination-certificate.

    So now it is no longer sufficient to simply verify the EU-VAT-ID of the buyer, issue the invoice and ship the goods, but you also have to produce such a Gelangensbescheinigung for each and every VAT-free intra-EU shipment. This document needs to include

    • the name and address of the recipient
    • the quantity and designation of the goods sold
    • the place and month when the goods were received
    • the date of when the document was signed
    • the signature of the recipient (not required in case of an e-mail where the e-mail headers show that the message was transmitted from a server under control of the recipient)

    How can you produce such a statement? Well, in the ideal / legal / formal case, you provide a form to your buyer, which he then signs and certifies that he has received the goods in the destination country.

    First of all, I find it offensive that I have to ask my customers to make such declarations in the first place. And then even if I accept this and go ahead with it, it is my legal responsibility to ensure that he actually fills this in.

    What if the customer doesn't want to fill it in or forgets about it?

    Then I as the seller am liable to pay 19% VAT on the purchase he made, despite me never having charged those 19%.

    So not only do I have to generate such forms and send them with my goods, but I also need a business process of checking for their return, reminding the customers that their form has not yet been returned, and in the end they can simply not return it and I lose money. Great.

    Track+Trace / Courier Services

    Now there are some alternate ways in which a Gelangensbescheinigung can be generated, for example by a track+trace protocol of the delivery company. However, the requirements on this track+trace protocol are so high that, at least when I checked in late 2013, the track+trace protocol of UPS did not fulfill them. For example, a track+trace protocol usually doesn't show the quantity and designation of goods. Why would it? UPS just moves a package from A to B, and there are no customs involved that would require knowing what's in the package.

    Postal Packages

    Now let's say you'd like to send your goods by postal service. For low-priced non-urgent goods, that's actually what you generally want to do, as everything else is simply way too expensive compared to the value of the goods.

    However, this is only permitted if the postal service you use provides you with a receipt of having accepted your package, containing the following mandatory information:

    • name and address of the entity issuing the receipt
    • name and address of the sender
    • name and address of the recipient
    • quantity and type of goods
    • date of having received the goods

    Now I don't know how this works in other countries, but in Germany you will not be able to get such a receipt from the post office.

    In fact I inquired several times with the legal department of Deutsche Post, up to the point of sending a registered letter (by Deutsche Post) to Deutsche Post. They have never responded to any of those letters!

    So we have the German tax authorities claiming yes, of course you can still do intra-EU shipments to other countries by postal services, you just need to provide a receipt, but then at the same time they ask for a receipt indicating details that no postal receipt would ever show.

    Particularly a postal receipt would never confirm what kind of goods you are sending. How would the postal service know? You hand them a package, and they transfer it. It is - rightfully - none of their business what its content may be. So how can you ask them to confirm that certain goods were received for transport?!?

    Summary

    So in summary:

    Since January 1st, 2014, we now have German tax regulations in force that make VAT-free intra-EU shipments extremely difficult, if not impossible:

    • The type of receipt they require from postal services is not provided by Deutsche Post, thereby making it impossible to use Deutsche Post for VAT free intra-EU shipments
    • The type of track+trace protocol issued by UPS does not fulfill the requirements, making it impossible to use them for VAT-free intra-EU shipments
    • The only other option is to get an actual receipt from the customer. If that customer doesn't want to provide this, the German seller is liable to pay the 19% German VAT, despite never having charged that to his customer

    Conclusion

    To me, the conclusion of all of this can only be one:

    German tax authorities do not want German sellers to sell VAT-free goods to businesses in other EU countries. They are actively trying to undermine the VAT principles of the EU. And nobody seems to complain about it or even realize there is a problem.

    What a brave new world we live in.

    by Harald Welte at November 02, 2015 11:00 PM

    October 31, 2015

    Harald Welte

    small tools: rtl8168-eeprom

    Some time ago I wrote a small Linux command line utility that can be used to (re)program the Ethernet (MAC) address stored in the EEPROM attached to an RTL8168 Ethernet chip.

    This is for example useful if you are a system integrator that has its own IEEE OUI range and you would like to put your own MAC addresses in devices that contain said Realtek ethernet chips (which come pre-programmed with some other MAC address).

    The source code can be obtained from: http://git.sysmocom.de/rtl8168-eeprom/

    by Harald Welte at October 31, 2015 11:00 PM

    small tools: gpsdate

    In 2013 I wrote a small Linux program that can be used to set the system clock based on the time received from a GPS receiver (via gpsd), particularly when a system is first booted. It is similar in purpose to ntpdate, but of course obtains time not from ntp but from the GPS receiver.

    This is particularly useful for RTC-less systems without network connectivity, which come up with a completely wrong system clock that needs to be properly set as soon as the GPS receiver finally has acquired a signal.
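    The principle is simple enough to sketch in a few lines of Python. This is not the gpsdate source, just an illustration of the idea: poll gpsd's JSON socket (port 2947) for a TPV report and set the system clock from its time field; actually setting the clock requires root:

    import calendar, ctypes, json, socket, time

    class timespec(ctypes.Structure):
        _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

    def set_system_clock(epoch_seconds):
        librt = ctypes.CDLL("librt.so.1", use_errno=True)
        ts = timespec(int(epoch_seconds), 0)
        if librt.clock_settime(0, ctypes.byref(ts)) != 0:   # 0 == CLOCK_REALTIME
            raise OSError(ctypes.get_errno(), "clock_settime failed (are you root?)")

    sock = socket.create_connection(("localhost", 2947))
    sock.sendall(b'?WATCH={"enable":true,"json":true};\n')
    for line in sock.makefile():
        report = json.loads(line)
        if report.get("class") == "TPV" and "time" in report:
            # gpsd reports ISO8601 UTC, e.g. "2015-10-31T12:00:00.000Z"
            stamp = time.strptime(report["time"][:19], "%Y-%m-%dT%H:%M:%S")
            set_system_clock(calendar.timegm(stamp))
            break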

    I asked the ntp hackers if they were interested in merging it into the official code base, and their response was (summarized) that with a then-future release of ntpd this would no longer be needed. So the gpsdate program remains an external utility.

    So in case anyone else might find the tool interesting: The source code can be obtained from http://git.sysmocom.de/gpsdate/

    by Harald Welte at October 31, 2015 11:00 PM

    October 29, 2015

    Harald Welte

    Deutsche Bank / unstable interfaces

    Deutsche Bank is a large, international bank. They offer services world-wide and are undoubtedly proud of their massive corporate IT department.

    Yet, at the same time, they get the most fundamental principles of user/customer-visible interfaces wrong: Don't change them. If you need to change them, manage the change carefully.

    In many software projects, keeping the API or other interface stable is paramount. Think of the Linux kernel, where breaking a userspace-visible interface is not permitted. The reasons are simple: If you break that interface, _everyone_ using that interface will need to change their implementation, and will have to synchronize that with the change on the other side of the interface.

    The internet online banking system of Deutsche Bank in Germany permits the upload of transactions by their customers in a CSV file format.

    And guess what? They change the file format from one day to the other.

    • without informing their users in advance, giving them time to adapt their implementations of that interface
    • without documenting the exact nature of the change
    • adding new fields to the CSV in the middle of the line, rather than at the end of the line, to make sure things break even more

    Now if you're running a business and depend on automating your payments using the interface provided by Deutsche Bank, this means that you fail to pay your suppliers in time, and you hastily drop/delay other (paid!) work that you have to do in order to try to figure out what exactly Deutsche Bank decided to change, completely unannounced, from one day to the other.
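    One cheap defensive measure on the consuming side is to validate the header row before trusting any of the data, so that an unannounced format change at least fails loudly instead of silently shifting fields around. A sketch (the column names and delimiter are invented, not Deutsche Bank's actual format):

    import csv

    # Invented column names -- the point is only to pin down the expected layout.
    EXPECTED_HEADER = ["booking_date", "counterparty", "iban", "amount", "currency"]

    def load_transactions(path):
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.reader(f, delimiter=";")
            header = next(reader)
            if header != EXPECTED_HEADER:
                # Refuse to guess: new columns inserted in the middle of the line
                # would otherwise shift every following field.
                raise ValueError("bank CSV format changed, got header: %r" % header)
            return [dict(zip(EXPECTED_HEADER, row)) for row in reader]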

    If at all, I would have expected this from a hobbyist kind of project. But seriously, from one of the world's leading banks? An interface that is probably used by thousands and thousands of users? WTF?!?

    by Harald Welte at October 29, 2015 11:00 PM

    October 28, 2015

    Harald Welte

    The VMware GPL case

    My absence from blogging meant that I didn't really publicly comment on the continued GPL violations by VMware, and the 2015 legal case that well-known kernel developer Christoph Hellwig has brought forward against VMware.

    The most recent update by the Software Freedom Conservancy on the VMware GPL case can be found at https://sfconservancy.org/news/2015/oct/28/vmware-update/

    In case anyone ever doubted: I of course join the ranks of the long list of Linux developers and other stakeholders that consider VMware's behavior completely unacceptable, if not outrageous.

    For many years they have been linking modified Linux kernel device drivers and entire kernel subsystems into their proprietary vmkernel software (part of ESXi). As an excuse, they have added a thin shim layer under GPLv2 which they call vmklinux. And to make all of this work, they had to add lots of vmklinux specific API to the proprietary vmkernel. All the code runs as one program, in one address space, in the same thread of execution. So basically, it is at the level of the closest possible form of integration between two pieces of code: Function calls within the same thread/process.

    In order to make all this work, they had to modify their vmkernel, implement vmklinux and also heavily modify the code they took from Linux in the first place. So the drivers are not usable with mainline linux anymore, and vmklinux is not usable without vmkernel either.

    If all the above is not a clear indication that multiple pieces of code form one work/program (and subsequently must be licensed under GNU GPLv2), what should ever be considered that?

    To me, it is probably one of the strongest cases one can find about the question of derivative works and the GPL(v2). Of course, all my ramblings have no significance in a court, and the judge may rule based on reports of questionable technical experts. But I'm convinced if the court was well-informed and understood the actual situation here, it would have to rule in favor of Christoph Hellwig and the GPL.

    What I really don't get is why VMware puts up the strongest possible defense one can imagine. Not only did they not back down in lengthy out-of-court negotiations with the Software Freedom Conservancy, but they also defend themselves strongly against the claims in court.

    In my many years of doing GPL enforcement, I've rarely seen such dedication and strong opposition. This shows the true nature of VMware as a malicious, unfair entity that gives a damn sh*t about other people's copyright, the Free Software community and its code of conduct as a whole, and the Linux kernel developers in particular.

    So let's hope they waste a lot of money in their legal defense, get a sufficient amount of negative PR out of this to the point of tainting their image, and finally obtain a ruling upholding the GPL.

    All the best to Christoph and the Conservancy in fighting this fight. For those readers that want to help their cause, I believe they are looking for more supporter donations.

    by Harald Welte at October 28, 2015 11:00 PM

    October 27, 2015

    Harald Welte

    What I've been busy with

    Those who don't know me personally and/or don't stay in touch more closely might be wondering what on earth happened to Harald in the last >= 1 year?

    The answer would be long, but I can summarize it as: I disappeared into sysmocom. You know, the company that Holger and I founded four years ago, in order to commercially support OpenBSC and related projects, and to build products around it.

    In recent years, the team has been growing to the point where in 2015 we suddenly had 9 employees and a handful of freelancers working for us.

    But then, that's still a small company, and based on the projects we're involved in, that team has to cover a variety of topics (next to the actual GSM/GPRS related work), including

    • mechanical engineering (enclosure design)
    • all types of electrical engineering
      • AC/electrical wiring/fusing on DIN rails
      • AC/DC and isolated DC/DC power supplies (based on modules)
      • digital design
      • analog design
      • RF design
    • prototype manufacturing and testing
    • software development
      • bare-iron bootloader/os/application on Cortex-M0
      • NuttX on Cortex-M3
      • OpenAT applications on Sierra Wireless
      • custom flavors of Linux on several different ARM architectures (TI DaVinci, TI Sitara)
      • drivers for various peripherals including Ethernet Switches, PoE PSE controller
      • lots of system-level software for management, maintenance, control

    I've been involved in literally all of those topics, with more of my time spent on the electronics side than on the software side. And if software, then more on the bootloader/RTOS side than on applications.

    So what did we actually build? It's unfortunately still not possible to disclose fully at this point, but it was all related to marine communications technology. GSM being one part of it, but only one of many in the overall picture.

    Given the quite challenging breadth of the tasks at hand and problems to solve, I'm actually surprised how much we could achieve with such a small team in a limited amount of time. But then, there's virtually no time left, which meant no gpl-violations.org work, no blogging, no progress on the various Osmocom Erlang projects for core network protocols, and last but not least no Taiwan holidays this year.

    Lately I see light at the end of the tunnel, and there is again a bit more time to get back to old habits, and thus I

    • resurrected this blog from the dead
    • resurrected various project homepages that have disappeared
    • started some more work on actual telecom stuff (osmo-iuh, for example)
    • restarted the Osmocom Berlin Meeting

    by Harald Welte at October 27, 2015 11:00 PM

    Andrew Zonenberg, Silicon Exposed

    New GPG key

    Hi everyone,

    I've been busy lately and haven't had a chance to post much. There will be a pretty good sized series coming up in a month or two (hopefully) on my next-gen FPGA cluster and JTAG stuff but I'm holding off until I have something better to write about.

    In the meantime, I've decided that my circa 2009 GPG key is long overdue for replacement so I've issued a new one and am posting the fingerprints in multiple public locations (this being one).

    The new key fingerprint is:
    859B A7BA DE9C 0BD5 EC01  FF36 3461 7AB9 B31C 7D7C

    Verification message signed with my old key:
    http://thanatos.virtual.antikernel.net/unlisted/new-key-notes.txt.asc

    by Andrew Zonenberg (noreply@blogger.com) at October 27, 2015 10:37 PM

    Bunnie Studios

    Name that Ware October 2015

    The Ware for October 2015 is shown below.

    …and one of the things that plugs into the slots visible in the photo above as an extra hint…

    Thanks again to Nava Whiteford for sharing this ware. Visit his blog and help him get permission from his wife to buy a SEM!

    by bunnie at October 27, 2015 07:54 AM

    Winner, Name that Ware September 2015

    The Ware for September 2015 is a Powerex CM600HA-24H, which met its demise serving as a driver for a tesla coil in the Orage sculpture (good guess 0xbadf00d!). I have a thing for big transistors, and I was very pleased to be gifted this even though it was busted. At $300 a piece, it’s not something I just get up and buy because I want to wear it around as a piece of jewelry; but it did make for a great, if not heavy, necklace. And it was interesting to take apart to see what was inside!

    As for the winner, Jimmyjo was the first to guess exactly the model of the IGBT. Congrats, email me for your prize!

    by bunnie at October 27, 2015 07:53 AM

    October 26, 2015

    Harald Welte

    Weblog + homepage online again

    On October 31st, 2014, I had rebooted my main server for a kernel upgrade, and could not mount the LUKS crypto volume ever again. While the technical cause for this remains a mystery until today (it has spawned some conspiracy theories), I finally took some time to recover some bits and pieces from elsewhere. I didn't want this situation to drag on for more than a year...

    Rather than bringing online the old content using sub-optimal and clumsy tools to generate static content (web sites generated by docbook-xml, blog by blosxom), I decided to give it a fresh start and try nikola, a more modern and actively maintained tool to generate static web pages and blogs.

    The blog is now available at http://laforge.gnumonks.org/blog/ (a redirect from the old /weblog is in place, for those who keep broken links for more than 12 months). The RSS feed URLs are different from before, but there are again per-category feeds so people (and planets) can subscribe to the respective category they're interested in.

    And yes, I do plan to blog again more regularly, to make this place not just an archive of a decade of blogging, but a place that is alive and thrives with new content.

    My personal web site is available at http://laforge.gnumonks.org/ while my (similarly re-vamped) freelancing business web site is also available again at http://hmw-consulting.de/.

    I still need to decide what to do about the old http://gnumonks.org/ site. It still has its old manual web 1.0 structure from the late 1990ies.

    I've also resurrected http://openezx.org/ and http://ftp.gpl-devices.org/ as well as http://ftp.gnumonks.org/ (old content). Next in line is gpl-violations.org, which I also intend to convert to nikola for maintenance reasons.

    by Harald Welte at October 26, 2015 11:00 PM

    ZeptoBARS

    CHANGJIANG MMBT2222A - npn BJT transistor : weekend die-shot

    Unlike the OnSemi MMBT2222A, the CHANGJIANG MMBT2222A has both a smaller die size and a simpler layout (BC847-like) - which should cause significantly lower hFE at high collector currents.

    Die size 234x234 µm.


    October 26, 2015 07:26 AM

    October 19, 2015

    ZeptoBARS

    Linear LT1021-5 ±0.05% precision reference : weekend die-shot

    Expected heavy duty digital correction? Nope. Just 15 fuses and a buried Zener - truly a work of art.
    Die size 2354x1364 µm.


    October 19, 2015 08:05 AM

    October 11, 2015

    ZeptoBARS

    ST UA741 - the opamp : weekend die-shot

    µA741 was the first "usable", widespread solid-state opamp, mainly due to its integrated capacitor for frequency compensation (which we now take for granted in general-purpose opamps). This chip has been reimplemented numerous times since 1968, like this ST UA741 in 2001. You can also take a look at the historic schematic of the µA741 here.

    Die size 1073x993 µm.


    October 11, 2015 05:32 PM

    October 06, 2015

    Video Circuits

    Experiments using the Rutt-Etra Analog Video Synthesizer and Siegel colorizer, 1975

    Video Synthesis Experiments, excerpts from Edin Velez on vimeo.

    A rare example of the Siegel Colorizer in use in this short excerpt.
    http://edinvelez.com

    by Chris (noreply@blogger.com) at October 06, 2015 12:26 PM

    September 29, 2015

    Elphel

    Google is testing AI to respond to privacy requests

    Robotic customer support fails while pretending to be an outsourced human. Last week I searched Google for Elphel and got a wrongly spelled name, a wrong address and a wrong phone number.

    Google search for Elphel


    A week ago I tried Google Search for our company (usually I only check recent results using a last week or last 3 days search) and noticed that on the first result page there was a Street View of my private residence, with my home address pointing to a business with the name “El Phel, Inc”.

    Yes, when we first registered Elphel in 2001 we used our home address, and even the first $30K check from Google for development of the Google Books camera came to this address, but it was never “El Phel, Inc.” Later wire transfers with payments to us for Google Books cameras as well as Street View ones were coming to a different address – 1405 W. 2200 S., Suite 205, West Valley City, Utah 84119. In 2012 we moved to the new building at 1455 W. 2200 S. as the old place was not big enough for the panoramic camera calibration.

    I was not happy to see my house showing as the top result when searching for Elphel; it is both a breach of my family's privacy and harmful to Elphel's business. Personally I would not consider a 14-year-old company with an international customer base a serious one if it were just a one-man home-based business. Sure, you can get similar Street View results for Google itself, but they would not come up when you search for “Google”. Neither would it return a wrongly spelled business name like “Goo & Gel, Inc.” and a phone number that belongs to a Baptist church in Lehi, Utah (update: they changed the phone number to the one of Elphel).

    Google original location


    Honestly, some of the fault was ours too: I had seen “El Phel” in a local Yellow Pages, but as we do not have a local business I did not pay attention to it – Google was always good at providing relevant information in the search results, extracting actual contact information from the company “Contacts” page directly.

    Noticing that Google had lost its edge in providing search results (Bing and Yahoo show relevant data), I first contacted Yellow Pages and asked them to correct the information, as there is no “El Phel, Inc.” at my home address and I'm not selling any X-Ray equipment there. They did it very promptly, and the probable source of the Google misinformation (“probable” as Google does not provide any links to the source) was gone for good.

    I waited for 24 hours hoping that Google would correct the information automatically (a post on the Elphel blog appears in Google search results 10 – 19 seconds after I press the “Publish” button). Nothing happened – the same “El Phel, Inc.” at our house.

    So I tried to contact Google. As Google did not provide the source of the search result, I tried to follow the recommendations to correct information on the map. And the first step was to log in with a Google account, since I could not find a way to contact Google without such an account. Yes, I do have one – I used Gmail when Google was our customer, and when I later switched to another provider (I prefer to use only one service per company, and I selected Google Search) I did not delete the Gmail account. I found my password and was able to log in.

    First I tried to select “Place doesn't exist” (there is no such company as “El Phel, Inc.”, the phone number is invalid, and there is no business at my home address).

    Auto confirmation came immediately:
    From: Google Maps <noreply-maps-issues@google.com>
    Date: Wed, Sep 23, 2015 at 9:55 AM
    Subject: Thanks for the edit to El Phel Inc
    To: еlphеl@gmаil.cоm
    Maps
    Thank you
    Your edit is being reviewed. Thanks for sharing your knowledge of El Phel Inc.
    El Phel Inc
    3200 Elmer St, Magna, UT, United States
    Your edit
    Place doesn't exist
    Edited on Sep 23, 2015 · In review
    Keep exploring,
    The Google Maps team
    © 2015 Google Inc. 1600 Amphitheatre Parkway, Mountain View, CA 94043
    You've received this confirmation email to update you about your editing activities on Google Maps.

    But nothing happened. Two days later I tried a different option (there was no place to provide a text entry):
    Your edit
    Place is private

    No results either.

    Then I tried to follow the other link after the inappropriate search result – “Are you the business owner?” (I'm not an owner of the non-existent business, but I am the owner of my house). And yes, I had to use my Gmail account again. There were several options for how I preferred to be contacted – I selected “by phone”, and shortly after a female-voiced robot called. I do not have a habit of talking to robots, so I did not listen to what it said, waiting for keywords like “press 0 to talk to a representative” or “Please stay on the line…”, but it never said anything like that and immediately hung up.

    The second time I selected email contact, but it seems to me that the email conversation was with some kind of Google Eliza. This was the first email:

    From : local-help@google.com
    To : andrey@elphel.com
    Subject : RE: [7-2344000008781] Google Local Help
    Date : Thu, 24 Sep 2015 22:48:47 -0700
    Add Label
    Hi,
    Greetings from Google.
    After investigating, i found that here is an existing page on Google (El Phel Inc-3200 S Elmer St Magna, UT 84044) which according to your email is incorrect information.
    Apologies for the inconvenience andrey, however as i can see that you have created a page for El Phel Inc, hence i would first request you to delete the Business page if you aren't running any Business. Also you can report a problem for incorrect information on Maps,Here is an article that would provide you further clarity on how to report a problem or fix the map.
    In case you have any questions feel free to reply back on the same email address and i would get back to you.
    Regards,
    Rohit
    Google My Business Support.

    This robot tried to mimic a kid's language (without capitalizing “I” or the first letter of my name), and its level of understanding of the matter was below that of a human (it was Google, not me, who created that page; I just wanted it to be removed).

    I replied as I thought it still might be a human, just tired and overwhelmed by the many privacy-related requests they receive (the email came well after hours in the United States).

    From : andrey <andrey@elphel.com>
    To : local-help@google.com
    Subject : RE: [7-2344000008781] Google Local Help
    Date : Fri, 25 Sep 2015 00:16:21 -0700
    Hello Rohit,
    I never created such page. I just tried different ways to contact Google to remove this embarrassing link. I did click on "Are you the business owner" (I am the owner of this residence at 3200 S Elmer St Magna, UT 84044) as I hoped that when I'll get the confirmation postcard I'll be able to reply that there is no business at this residential address).
    I did try link "how to report a problem or fix the map", but I could not find a relevant method to remove a search result that does not reference external page as a source, and assigns my home residence to the search results of the company, that has a different (than listed) name, is located in a different city (West Valley City, 84119, not in Magna, 84044), and has a different phone number.
    So please, can you remove that incorrect information?
    Andrey Filippov

    Nothing happened either; then on Sunday night (local time) another email came from “Rohit”:

    From : local-help@google.com
    To : andrey@elphel.com
    Subject : RE: [7-2344000008781] Google Local Help
    Date : Sun, 27 Sep 2015 18:11:44 -0700
    Hi,
    Greetings from Google.
    I am working on your Business pages and would let you know once get any update.
    Please reply back on the same email address in case of any concerns.
    Regards,
    Rohit
    Google My Business Support

    You may notice that it had the same ticket number, so the sender had all the previous information when replying. For any human capable of using just Google Search, it would take no more than 15–30 seconds to find out that their information is incorrect and either remove it completely (as I asked) or replace it with something relevant.

    And there is another detail that troubles me. Looking at the times and days when the “Google My Business Support” emails came, and at the name “Rohit”, it may look like they came from India. While testing non-human communications, Google might hope that correspondents would be more likely to attribute inconsistencies in the generated emails to cultural differences and miss the actual software flaws. Does Google count on us being somewhat racist?

    Following the provided links, I was not able to get any response from a human representative; only two robots (phone and email) contacted me. I hope that this post will work better, help to cure this breach of my family’s privacy, and end the harm that this invalid information, provided by such a respected Internet search company, causes to the business. I realize that robots will take over more and more of our activities (and we are helping that happen ourselves), but maybe this process sometimes goes too fast?

    by andrey at September 29, 2015 04:25 AM

    September 28, 2015

    Bunnie Studios

    Sex, Circuits & Deep House

    P9010002
    Cari with the Institute Blinky Badge at Burning Man 2015. Photo credit: Nagutron.

    This year for Burning Man, I built a networked light badge for my theme camp, “The Institute”. Walking in the desert at night with no light is a dangerous proposition – you can get run over by cars or bikes, or twist an ankle tripping over an errant bit of rebar sticking out of the ground. Thus, the outrageous, bordering on grotesque, lighting spectacle that Burning Man becomes at night grows out of a central need for safety in the dark. While a pair of dimly flashing red LEDs should be sufficient to ensure one’s safety, anything more subtle than a Las Vegas strip billboard tends to go unnoticed by fast-moving bikers, thanks to the LED arms race that Burning Man has become at night.

    I wanted to make a bit of lighting that my campmates could use to stay safe – and optionally stay classy by offering a range of more subtle lighting effects. I also wanted the light patterns to be individually unique, allowing easy identification in dark, dusty nights. However, diddling with knobs and code isn’t a very social experience, and few people bring laptops to Burning Man. I wanted to come up with a way for people to craft an identity that was inherently social and interactive. In an act of shameless biomimicry, I copied nature’s most popular protocol for creating individuals – sex.

    By adding a peer-to-peer radio in each badge, I was able to implement a protocol for the breeding of lighting patterns via sex.



    Some examples of the unique light patterns possible through sex.

    Sex

    When most people think of sex, what they are actually thinking about is sexual intercourse. This is understandable, as technology allows us to have lots of sexual intercourse without actually accomplishing sexual reproduction. Still, the double-entendre of saying “Nice lights! Care to have sex?” is a playful ice breaker for new interactions between camp mates.

    Sex, in this case, is used to breed the characteristics of the badge’s light pattern as defined through a virtual genome. Things like the color range, blinking rate, and saturation of the light pattern are mapped into a set of diploid (two copies of each gene) chromosomes (code) (spec). Just as in biological sex, a badge randomly picks one copy of each gene and packages them into a sperm and an egg (every badge is a hermaphrodite, much like plants). A badge’s sperm is transmitted wirelessly to another host badge, where it’s mixed with the host’s egg and a new individual blending traits of both parents is born. The new LED pattern replaces the current pattern on the egg donor’s badge.
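
    A minimal sketch of one such breeding step, assuming a simplified genome of named byte-valued genes (the gene names and data layout here are hypothetical; the badge's actual genome is defined in the linked code and spec):

    import random

    # Hypothetical diploid genome: each trait carries two byte-valued alleles.
    GENES = ["hue_base", "hue_range", "saturation", "blink_rate"]

    def make_gamete(genome):
        # Randomly pick one of the two alleles for every gene (meiosis).
        return {gene: random.choice(alleles) for gene, alleles in genome.items()}

    def breed(maternal_genome, paternal_genome):
        # Combine an egg from the maternal badge with a sperm received by radio.
        egg = make_gamete(maternal_genome)
        sperm = make_gamete(paternal_genome)
        return {gene: (egg[gene], sperm[gene]) for gene in GENES}

    def random_genome():
        return {gene: (random.randrange(256), random.randrange(256)) for gene in GENES}

    # The child genome replaces the light pattern on the egg donor's badge.
    child = breed(random_genome(), random_genome())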

    Biological genetic traits are often analog, not digital – height or weight are not coded as discrete values in a genome. Instead, observed traits are the result of a complex blending process grounded in the minutiae of metabolic pathways and the efficacy of enzymes resulting from the DNA blueprint and environment. The manifestation of binary situations like recessive vs. dominant is often the result of a lot of gain being applied to an analog signal, thus causing the expressed trait to saturate quickly if it’s expressed at all.

    In order to capture the wonderful diversity offered by sex, I implemented quantitative traits in the light genome. Instead of having a single bit for each trait, each trait is a byte, and there’s an expression function that combines the values from each copy of the gene (the alleles) to derive the final observed trait (the phenotype).

    By carefully picking expression functions, I can control how the average population looks. Let’s consider saturation (I used an HSV colorspace, instead of RGB, which makes it much easier to create aesthetically pleasing color combinations). A highly saturated color is vivid and bright. A less saturated color appears pastel, until finally it’s washed out and looks just white or gray (a condition analogous to albinism).

    If I want albinism to be rare, and bright colors to be common, the expression function could be a saturating add. Thus, even if one allele (copy of the gene) has a low value, the other copy just needs to be a modest value to result in a bright, vivid coloration. Albinism only occurs when both copies have a fairly low value.




    Population makeup when using saturating addition to combine the maternal and paternal saturation values. Albinism – a badge light pattern looking white or gray – happens only when both maternal and paternal values are small. ‘S’ means large saturation, and ‘s’ means little saturation. ‘SS’ and ‘Ss’ pairings of genes leads to saturated colors, while only the ‘ss’ combination leads to a net low saturation (albinism).

    On the other hand, if I wanted the average population to look pastel, I could simply take the average of the two alleles as the saturation value. In this case, a bright color can only be achieved if both alleles have a high value. Likewise, an albino can only be achieved if both alleles have a low value.




    Population makeup when using averaging to combine the maternal and paternal saturation values. The most common case is a pastel palette, with vivid colors and albinism both suppressed in the population.

    For Burning Man, I chose saturating addition as the expression function, to have the population lean toward vivid colors. I implemented other features such as cyclic dimming, hue rotation, and color range using similar techniques.
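
    As an illustration, the two expression functions discussed above might look like the following sketch (the 0-255 allele range matches the byte-per-gene scheme described earlier; the function names are mine, not the badge firmware's):

    def express_saturating_add(allele_a, allele_b):
        # Vivid colors are common: one strong allele is enough to saturate.
        return min(allele_a + allele_b, 255)

    def express_average(allele_a, allele_b):
        # Pastel colors are common: both alleles must be high for a vivid result.
        return (allele_a + allele_b) // 2

    print(express_saturating_add(30, 200))  # 230 -> vivid
    print(express_saturating_add(30, 40))   # 70  -> washed out, albino-like
    print(express_average(30, 200))         # 115 -> pastel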

    It’s important when thinking about biological genes to remember that they aren’t like lines of computer code. Rather, they are like the knobs on an analog synth, and the resulting sound depends not just on the position of each knob, but on where it sits in the signal chain and how it interacts with other effects.

    Gender and Consent

    Beyond genetics, there is a minefield of thorny decisions to be made when implementing the social policies and protocols around sex. What are the gender roles? And what about consent? This is where technology and society collide, making for a fascinating social experiment.

    I wanted everyone to have an opportunity to play both gender roles, so I made the badges hermaphroditic, in the sense that everyone can give or receive genetic material. The “maternal” role receives sperm, combines it with an egg derived from the currently displayed light pattern, and replaces its light pattern with a new hybrid of both. The “paternal” role can transmit a sperm derived from the currently displayed pattern. Each badge has the requisite ports to play both roles, and thus everyone can play the role of male or female simply by being either the originator of or responder to a sex request.

    This leads us to the question of consent. One fundamental flaw in the biological implementation of sex is the possibility of rape: operating the hardware doesn’t require mutual consent. I find the idea of rape disgusting, even if it’s virtual, so rape is disallowed in my implementation. In other words, it’s impossible for a paternal badge to force a sperm into a maternal badge: male roles are not allowed to have sex without first being asked by a female role. Instead, the person playing the female role must first initiate sex with a target mate. Conversely, female roles can’t steal sperm from male roles; sperm is only generated after explicit consent from the male. Assuming consent is given, a sperm is transmitted to the maternal badge and the protocol is complete. This two-way handshake assures mutual consent.
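
    A toy model of that two-way handshake might look like this (the message flow follows the description above, but the names and structure are invented for illustration; the real radio protocol lives in the badge sources):

    class Badge:
        def __init__(self, name):
            self.name = name
            self.consents_to_sex = False    # set by the owner through the badge UI

        # Maternal role: the receiving side must initiate the request.
        def request_sex(self, mate):
            sperm = mate.handle_sex_request(self)
            if sperm is None:
                return "denied"             # no consent, no genetic material moves
            self.receive_sperm(sperm)
            return "bred"

        # Paternal role: a gamete is only generated after explicit consent.
        def handle_sex_request(self, requester):
            if not self.consents_to_sex:
                return None
            return {"from": self.name}      # stand-in for a real gamete packet

        def receive_sperm(self, sperm):
            pass                            # breed and replace the light pattern here

    a, b = Badge("maternal"), Badge("paternal")
    print(a.request_sex(b))                 # "denied" until b's owner consents
    b.consents_to_sex = True
    print(a.request_sex(b))                 # "bred"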

    This non-intuitive and partially role-reversed implementation of sex led to users asking support questions akin to “I’m trying to have sex, but why am I constantly being denied?” and my response was – well, did you ask your potential mate if it was okay to have sex first? Ah! Consent. The very important but often overlooked step before sex. It’s a socially awkward question, but with some practice it really does become more natural and easy to ask.

    Some users were enthusiastic early adopters of explicit consent, while others were less comfortable with the question. It was interesting to see the ways straight men would ask other straight men for sex – they would ask for “ahem, blinky sex” – and anecdotally women seemed more comfortable and natural asking to have sex (regardless of the gender of the target user).

    As an additional social experiment, I introduced a “rare” trait (pegged at ~3% of a randomly generated population) consisting of a single bright white pixel that cycles around the LED ring. I wanted to see if campmates would take note and breed for the rare trait simply because it’s rare. At the end of the week, more people were expressing the rare phenotype than at the beginning, so presumably some selective breeding for the trait did happen.

    In the end, I felt that having sex to breed interesting light patterns was a lot more fun for everyone than tweaking knobs and sliders in a UI. Also, because traits are inherited through sexual reproduction, by the end of the event one started to see families of badges gaining similar traits, but thanks to the randomness inherent in sex you could still tell individuals apart in the dark by their light patterns.

    Finding Friends

    Implementing sex requires a peer-to-peer radio. So why not also use the radio to help people locate nearby friends? Seems like a good idea on the outside, but the design of this system is a careful balance between creating a general awareness of friends in the area vs. creating a messaging client.

    Personally, one of the big draws of going to Burning Man is the ability to unplug from the Internet and live in an environment of intimate immediacy – if you’re physically present, you get 100% of my attention; otherwise, all bets are off. Email, SMS, IRC, and other media for interaction (at least, I hear there are others, but I don’t use them…) are great for networking and facilitating business, but they detract from focusing on the here and now. For me there’s something ironic about seeing a couple in a fancy restaurant, both hopelessly lost staring deeply into their smartphones instead of each other’s eyes. Being able to set an auto-responder for two weeks which states that your email will never be read is pretty liberating, and allows me to open my mind up to trains of thought that can take days to complete. Thus, I really wanted to avoid turning the badge into a chat client, or any sort of communication medium that sets any expectation of reading messages and responding in a timely fashion.

    On the other hand, meeting up with friends at Burning Man is terribly hard. It’s life before the cell phone – if you’re old enough to remember that. Without a cell phone, you have a choice between enjoying the music, stalking around the venue to find friends, or dancing in one spot all night long so you’re findable. Simply knowing if my friends have finally showed up is a big help; if they haven’t arrived yet, I can get lost in the music and check out the sound in various parts of the venue until they arrive.

    Thus, I designed a very simple protocol which will only reveal if your friends are nearby, and nothing else. Every badge emits a broadcast ping every couple of seconds. Ideally, I’d use an RSSI (receive signal strength indicator) to figure out how far the ping is, but due to a quirk of the radio hardware I was unable to get a reliable RSSI reading. Instead, every badge would listen for the pings, and decrement the ping count at a slightly slower average rate than the ping broadcast. Thus, badges solidly within radio range would run up a ping count, and as people got farther and farther away, the ping count would decrease as pings gradually get lost in the noise.
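
    The presence logic can be sketched roughly as follows; the decay interval and the data structure are made up, and only the idea of decrementing slightly more slowly than the broadcast rate comes from the description above.

    PING_PERIOD = 2.0    # each badge broadcasts roughly every couple of seconds
    DECAY_PERIOD = 2.5   # counts decay slightly more slowly than they accumulate

    ping_counts = {}     # badge name -> accumulated ping count

    def on_ping_received(name):
        ping_counts[name] = ping_counts.get(name, 0) + 1

    def decay_tick():
        # Called every DECAY_PERIOD: badges that stop pinging fade out.
        for name in list(ping_counts):
            ping_counts[name] -= 1
            if ping_counts[name] <= 0:
                del ping_counts[name]

    def nearby_friends():
        # Badges solidly in range build up a count; distant ones hover near zero.
        return sorted(ping_counts, key=ping_counts.get, reverse=True)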


    Friend finding UI in action. In this case, three other badges are nearby, SpacyRedPhage, hap, and happybunnie:-). SpacyRedPhage is well within range of the radio, and the other two are farther away.

    The system worked surprisingly well. The reliable range of the radio worked out to be about 200m in practice, which is about the sound field of a major venue at Burning Man. It was very handy for figuring out if my friends had left already for the night, or if they were still prepping at camp; and there was one memorable reunion at sunrise where a group of my camp mates drove our beloved art car, Dr. Brainlove, to Robot Heart and I was able to quickly find them thanks to my badge registering a massive amount of pings as they drove into range.

    Hardware Details

    I’m not so lucky that I get to design such a complex piece of hardware exclusively for a pursuit as whimsical as Burning Man. Rather, this badge is a proof-of-concept of a larger effort to develop a new open-source platform for networked embedded computers (please don’t call it IoT) backed by a rapid deployment supply chain. Our codename for the platform is Orchard.

    The Burning Man badge was our first end-to-end test of Orchard’s “supply chain as a service” concept. The core reference platform is fairly well-documented here, and as you can see looks nothing like the final badge.


    Bottom: orchard reference design; top: orchard variant as customized for Burning Man.

    However, the only difference at a schematic level between the reference platform and the badge is the addition of 14 extra RGB LEDs, the removal of the BLE radio, and redesign of the captouch electrode pattern. Because the BOM of the badge is a strict subset of the reference design, we were able to go from a couple prototypes in advance of a private Crowd Supply campaign to 85 units delivered at the door of camp mates in about 2.5 months – and the latency of shipping units from China to front doors in the US accounts for one full month of that time.




    The badge sports an interactive captouch surface, an OLED display, 900MHz ISM band peer-to-peer radio, microphone, accelerometer, and more!

    If you’re curious, you can view documentation about the Orchard platform here, and discuss it at the Kosagi forum.

    Reflection

    As an engineer, my “default” existence is confined on four sides by cost, schedule, quality, and specs, with a sprinkling of legal, tax, and regulatory constraints on top. It’s pretty easy to lose your creative spark when every day is spent threading the needle of profit and loss.

    Even though the implementation of Burning Man’s principles of decommodification and gifting is far from perfect, it’s sufficient to enable me to loosen the shackles of my daily existence and play with technology as a medium for enhancing human interactions, and not simply as a means for profit. In other words, thanks to the values of the community, I’m empowered and supported to build stuff that wouldn’t make sense for corporate shareholders, but might improve the experiences of my closest friends. I think this ability to leave daily existence behind for a couple weeks is important for staying balanced and maintaining perspective, because at least for me maximizing profit is rarely the same as maximizing happiness. After all, a warm smile and a heartfelt hug is priceless.

    by bunnie at September 28, 2015 10:16 AM

    September 26, 2015

    ZeptoBARS

    Diodes BC847BS - matched BJT pair : weekend die-shot

    Diodes Incorporated BC847BS - pair of npn transistors with matched hFE. Internally it has 2 separate dies.
    Die size 306x306 µm.



    Second die:


    Lithography repeatability is definitely better than this. Parameter matching is likely achieved by using adjacent dies from the wafer. Two dies are used because one cannot place 2 BC847 transistors on the same die without significant changes to the technology (and it would not be a BC847 anymore) – the die bulk serves as one of the transistor terminals.

    Difference between the dies. The top metal is quite non-uniform optically (as usual) over the area, but this is unlikely to have any impact on the electrical characteristics. It would be interesting to make a similar difference photo for non-matched transistors.

    September 26, 2015 01:22 PM

    September 25, 2015

    Free Electrons

    Free Electrons at the Linux Kernel Summit 2015

    Kernel Summit 2012 in San Diego

    The Linux Kernel Summit is, as Wikipedia says, an annual gathering of the top Linux kernel developers, and is an invitation-only event.

    In 2012 and 2013, several Free Electrons engineers were invited to and participated in a sub-event of the Linux Kernel Summit, the “ARM mini-kernel summit”, which was more specifically focused on ARM-related developments in the kernel. Gregory Clement and Thomas Petazzoni went to the event in 2012 in San Diego (United States), and in 2013, Maxime Ripard, Gregory Clement, Alexandre Belloni and Thomas Petazzoni participated in the ARM mini-kernel summit in Edinburgh (UK).

    This year, Thomas Petazzoni has been invited to the Linux Kernel Summit, which will take place in late October in Seoul (South Korea). We’re happy to see that our continuous contributions to the Linux kernel are recognized and allow us to participate in such an invitation-only event. For us, participating in the Linux Kernel Summit is an excellent way of keeping up to date with the latest Linux kernel developments and, where needed, giving feedback from our experience working in the embedded industry with several SoC, board and system vendors.

    by Thomas Petazzoni at September 25, 2015 11:26 AM

    September 24, 2015

    ZeptoBARS

    TL431 - adjustable shunt regulator : weekend die-shot

    TL431 is another adjustable shunt regulator often used in linear supplies with external power transistor.
    Die size 592x549 µm.


    September 24, 2015 01:34 PM

    September 18, 2015

    Elphel

    NC393 progress update: all hardware is operational

    10393 with 4 image sensors

    10393 with 4 image sensors



    Finally all the parts of the NC393 prototype are tested, and we can now make the circuit diagram, parts list and PCB layout of this board public. About half of the board components were tested immediately when the prototype was built – almost two years ago – as those tests did not require any FPGA code, just the initial software that was mostly already available from the distributions for other boards based on the same Xilinx Zynq SoC. The only missing parts were the GPL-licensed initial bootloader and a few device drivers.

    Implementation of the 16-channel DDR3 memory controller

    Getting to the next part – testing of the FPGA-controlled DDR3 memory – took us longer: the overall concept and the physical layer were implemented in June 2014, while the timing calibration software and the application modules for image recording and retrieval were implemented in the spring of 2015.

    Initial image acquisition and compression

    When the memory was proven operational, what remained untested on the board were the sensor connections and the high-speed serial links for SATA. I decided not to make any temporary modules just to check the physical sensor connections, but to port the complete image acquisition, processing and compression functionality of the existing NC353 camera (just at a higher clock rate and with multiple channels instead of a single one) and then test the physical operation together with all the code.

    Sensor acquisition channels: From the sensor interface to the video memory buffer

    The image acquisition code was ported (or re-written) in June, 2015. This code includes:

    • Sensor physical interface – currently for the existing 10338 12-bit parallel sensor front ends, with provisions for adding high-speed serial sensors with up to 8 lanes + clock. It is also planned to bond together multiple sensor channels to interface a single large/high-speed sensor
    • Data and clock synchronization, flexible phase adjustment to recover image data and frame format for different camera configurations, including sensor multiplexers such as the 10359 board
    • Correction of the lens vignetting and fine-step scaling of the pixel values, individual for each of the multiplexed sensors and color channels
    • Programmable gamma-conversion of the image data
    • Writing image data to the DDR3 image buffer memory using one or several frame buffers per channel; both 8 bpp and 16 bpp (raw image data, bypassing gamma-conversion) formats are supported
    • Calculation of the histograms, individual for each color component and multiplexed sensor
    • Histograms multiplexer and AXI interface to automatically transfer histogram data to the system memory
    • I²C sequencer controls the image sensors over the I²C interface by applying software-provided register changes when the designated frame starts; commands can be scheduled up to 14 frames in advance (a minimal scheduling sketch follows this list)
    • Command frame sequencer (one per sensor channel) schedules and applies system register writes (such as compressor control) synchronously to the sensor frames; commands can also be scheduled up to 14 frames in advance
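
    For illustration only, the frame-synchronous scheduling used by these sequencers might be modeled as below. This is a sketch in Python; the class name and the register address are hypothetical, and the real implementation lives in the camera's FPGA code and its Python test programs.

    # Hypothetical model of a per-channel command sequencer: writes queued for a
    # future frame are applied when that frame starts, up to 14 frames ahead.
    MAX_AHEAD = 14

    class FrameSequencer:
        def __init__(self):
            self.current_frame = 0
            self.pending = {}                 # absolute frame number -> [(addr, data)]

        def schedule(self, frames_ahead, addr, data):
            assert 0 <= frames_ahead <= MAX_AHEAD
            target = self.current_frame + frames_ahead
            self.pending.setdefault(target, []).append((addr, data))

        def on_frame_start(self):
            # Called by the (simulated) frame sync: apply everything due now.
            for addr, data in self.pending.pop(self.current_frame, []):
                print("frame %d: write 0x%04x <= 0x%04x" % (self.current_frame, addr, data))
            self.current_frame += 1

    seq = FrameSequencer()
    seq.schedule(2, 0x3012, 0x0400)           # e.g. change exposure two frames ahead
    for _ in range(4):
        seq.on_frame_start()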

    JPEG/JP4 compression functionality

    Image compressors get their input data from the external video buffer memory organized as 16×16 pixel macroblocks; in the case of color JPEG images, larger overlapping tiles of 18×18 (or 20×20) pixels are needed to interpolate the “missing” colors from the Bayer mosaic input. As all the data goes through the buffer, there is no strict requirement to have the same number of compressor and image acquisition modules, but the initial implementation uses a 1:1 ratio and there are 4 identical compressor modules instantiated in the design. The compressor output data is multiplexed between the channels and then transferred to the system memory using 1 or 2 of the Xilinx Zynq AXI HP interfaces.
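
    As a rough illustration of why the overlapping tiles are larger than the macroblocks, the origin of an 18×18 tile for a 16×16 macroblock with a one-pixel interpolation border on each side might be computed as follows. This is only a sketch; the clamping at the image edges is my assumption, not necessarily the camera's actual policy.

    def tile_origin(mb_x, mb_y, width, height, mb=16, border=1):
        # Top-left corner of the (mb + 2*border)-sized tile for macroblock
        # (mb_x, mb_y), clamped so the tile stays inside the image.
        tile = mb + 2 * border
        x = min(max(mb_x * mb - border, 0), width - tile)
        y = min(max(mb_y * mb - border, 0), height - tile)
        return x, y

    print(tile_origin(0, 0, 2592, 1936))   # (0, 0)   - clamped at the image corner
    print(tile_origin(5, 3, 2592, 1936))   # (79, 47) - one pixel of overlap on each side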

    This portion of the code is also based on the earlier design used in the existing NC353 camera (some modules reuse code from as early as 2002); the new part of the code deals with flexible memory access, as the older camera firmware used a hard-wired 20×20 pixel tile format. The current code contains four identical compressor channels providing JPEG/JP4 compression of the data stored in the dedicated DDR3 video buffer memory and then transferring the result to the system memory circular buffers over one or two of the four Xilinx Zynq AXI HP channels. Other camera applications that use sensor data for realtime processing, rather than transferring all the image data to the host, may reduce the number of compressors. It is also possible to use multiple compressors to work on a single high-resolution/high-frame-rate sensor data stream.

    Single compressor channel contains:

    • Macroblock buffer interface requests 32×18 or 32×16 pixel tiles from the memory and provides 18×18 overlapping macroblocks for JPEG or 16×16 non-overlapping macroblocks for JP4 using 4KB memory buffer. This buffer eliminates the need to re-read horizontally overlapping pixels when processing consecutive macroblocks
    • Pixel buffer interface retrieves data from the memory buffer, providing a sequential pixel stream of 18×18 (or 16×16) pixels for each macroblock
    • Color conversion module selects one of the sub-modules : csconvert18a, csconvert_mono, csconvert_jp4 or csconvertjp4_diff to convert possibly overlapping Bayer mosaic tiles to a sequence of 8×8 blocks for 2-d DCT transform
    • Average value extractor calculates average value in each 8×8 block, subtracts it before DCT and restores after – that reduces data width in DCT processing module
    • xdct393 performs 2-d DCT for each 8×8 pixel block
    • Quantizer re-orders each block's DCT components from the scan-line to the zigzag sequence and quantizes them using software-calculated and loaded tables. This is the only lossy stage of the JPEG algorithm; when the compression quality is set to 100% all the coefficients are set to 1 and the conversion is lossless
    • Focus sharpness module accumulates the amount of high-frequency components to estimate image sharpness over a specified window, to facilitate (auto) focusing. It also allows replacing, on the fly, the average block value of the image with the amount of high-frequency components in the same block, providing a visual indication of the focus sharpness
    • RLL encoder converts the continuous 64-samples-per-block data stream into RLL-encoded data bursts
    • Huffman encoder uses software-generated tables to provide additional lossless compression of the RLL-encoded data. This module (together with the next one) runs at double the pixel clock rate and has an input FIFO between the clock domains
    • Bit stuffer consolidates the variable-length codes coming out of the Huffman encoder into fixed-width words, escaping each 0xff byte (these bytes have special meaning in a JPEG stream) by inserting 0x00 right after it (see the sketch after this list). It additionally provides the image timestamp and length in bytes after the end of the compressed data, before padding the data to a multiple of 32-byte chunks; this metadata has a fixed offset before the 32-byte aligned data end
    • Compressor output FIFO converts the 16-bit wide data from the bit stuffer module, received at double the compressor clock rate (currently 200MHz), into a 64-bit wide output at the maximal clock rate (150MHz) of the Xilinx Zynq AXI HP port; it also provides buffering when several compressor channels share the same AXI HP channel
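
    The 0xff escaping mentioned above is standard JPEG byte stuffing. A minimal software model of it (a sketch, not the actual Verilog module) could look like this:

    def byte_stuff(entropy_coded):
        # Insert a 0x00 after every 0xff so the byte cannot be mistaken
        # for a JPEG marker inside the compressed stream.
        out = bytearray()
        for b in entropy_coded:
            out.append(b)
            if b == 0xFF:
                out.append(0x00)
        return bytes(out)

    assert byte_stuff(b"\x12\xff\x34") == b"\x12\xff\x00\x34"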

    Another module – the 4:1 compressor multiplexer – is shared between multiple compressor channels. It is possible (defined by Verilog parameters) to use either a single multiplexer with one AXI HP port (SAXIHP1) and 4 compressor inputs (4:1), or two of these modules interfacing two AXI HP channels (SAXIHP1 and SAXIHP2), reducing the number of concurrent inputs of each multiplexer to just 2 (2 × 2:1). The multiplexers use a fair arbitration policy and consolidate AXI bursts to the full 16×64 bits when possible. Status registers provide image data pointers for the last write and the last frame start, each both as sent to AXI and after confirmation via the AXI write response channel.

    Porting remaining FPGA functionality to the new camera

    Additional modules were ported to complete the existing NC353 functionality:

    • Camera real time clock that provides current time with 1 microsecond resolution to various modules. It has accumulator-based correction circuitry to compensate for crystal oscillator frequency variations
    • Inter-camera synchronization module generates and/or receives synchronization signals between multiple camera modules or other devices. When used between cameras, each synchronization pulse has timestamp information attached in a serialized form, so the metadata of all simultaneous images from multiple synchronized cameras contains the same time code generated by the “master” camera
    • Event logger records data from multiple sources, such as GPS, IMU, image acquisition events and external signal channel (like a vehicle wheel rotation sensor)

    Simulating the full codebase

    All that code was written (either new or modified from the existing NC353 FPGA project) by the end of July 2015, and then the most fun began. First I used the proven NC353 code to simulate (using Icarus Verilog + GtkWave) with the same input data as that provided to the new x393 code, following the signal chains and making sure that the data at each checkpoint matched. That was especially useful when debugging the JPEG compressor, as the intermediate data is difficult to follow. When I was developing the first JPEG compressor in 2002, I had to save output data from the various processing stages and compare it to the software compression output of the same image data at the similar stages. Having a working implementation helped a lot, and in 3 weeks I was able to match the output from all the processing stages described above, except the event logger which I have not verified yet.

    Testing the hardware

    Then it was time to translate the Verilog test fixture code into Python programs running on the target hardware, extending the code developed earlier for the memory controller. The code is able to parse Verilog parameter definition files – that simplified keeping the Verilog and Python code in sync. It would be nice to use something like Cocotb in the future and completely get rid of the manual Verilog-to-Python translation.
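
    A toy version of such parameter parsing might look like the following. This is a sketch; the real test code handles far more Verilog syntax than this single regular expression does.

    import re

    # Matches simple Verilog parameter definitions such as:
    #   parameter FRAME_WIDTH = 13;
    #   parameter [31:0] CMD_ADDR = 32'h38;
    PARAM_RE = re.compile(
        r"parameter\s+(?:\[[^\]]*\]\s*)?(\w+)\s*=\s*([0-9]+'h[0-9a-fA-F_]+|\d+)")

    def parse_verilog_parameters(text):
        params = {}
        for name, value in PARAM_RE.findall(text):
            if "'h" in value:
                value = int(value.split("'h")[1].replace("_", ""), 16)
            else:
                value = int(value)
            params[name] = value
        return params

    print(parse_verilog_parameters("parameter FRAME_WIDTH = 13;\n"
                                   "parameter [31:0] CMD_ADDR = 32'h38;"))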

    As I am designing code for a reconfigurable FPGA (not for an ASIC), my usual strategy is not to aim for high simulation coverage, but to simulate to a “barely working” stage, then use the actual hardware (which runs tens of millions of times faster than the simulator), detect the problems, and then try to reproduce the same condition in simulation. But when I had just started to run the hardware, I realized that there was too little I could learn about its current state. Remembering the mess of temporary debug code I had in previous projects, and the inability of the synthesis tool to directly access the qualified names of signals inside sub-modules, I implemented a rather simple debug infrastructure that uses a single register ring (like a simplified JTAG) threaded through all the modules to be debugged, and matching Python code that allows access to individual bit fields of the ring. The design includes a single debug_master module and debug_slave modules in each of the design module instances that need debugging (and in the modules above them – up to the top one). By the time the camera was able to generate correct images, the total debug ring consisted of almost a hundred 32-bit registers; when I later disabled this debug functionality by commenting out a single `define DEBUB_RING macro, it recovered almost 5% of the device slices. The program output looks like:
    x393 +0.001s--> print_debug 0x38 0x3e
    038.00: compressors393_i.jp_channel0_i.debug_fifo_in [32] = 0x6e280 (451200)
    039.00: compressors393_i.jp_channel0_i.debug_fifo_out [28] = 0x1b8a0 (112800)
    039.1c: compressors393_i.jp_channel0_i.dbg_block_mem_ra [ 3] = 0x3 (3)
    039.1f: compressors393_i.jp_channel0_i.dbg_comp_lastinmbo [ 1] = 0x1 (1)
    03a.00: compressors393_i.jp_channel0_i.pages_requested [16] = 0x26c2 (9922)
    03a.10: compressors393_i.jp_channel0_i.pages_got [16] = 0x26c2 (9922)
    03b.00: compressors393_i.jp_channel0_i.pre_start_cntr [16] = 0x4c92 (19602)
    03b.10: compressors393_i.jp_channel0_i.pre_end_cntr [16] = 0x4c92 (19602)
    03c.00: compressors393_i.jp_channel0_i.page_requests [16] = 0x4c92 (19602)
    03c.10: compressors393_i.jp_channel0_i.pages_needed [16] = 0x26c2 (9922)
    03d.00: compressors393_i.jp_channel0_i.dbg_stb_cntr [16] = 0xcb6c (52076)
    03d.10: compressors393_i.jp_channel0_i.dbg_zds_cntr [16] = 0xcb6c (52076)
    03e.00: compressors393_i.jp_channel0_i.dbg_block_mem_wa [ 3] = 0x4 (4)
    03e.03: compressors393_i.jp_channel0_i.dbg_block_mem_wa_save [ 3] = 0x0 (0)
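
    Reading a line such as 039.1c: ... [ 3] = 0x3 as “a 3-bit field starting at bit offset 0x1c of ring word 0x39” (my interpretation of the printout format), a hypothetical helper for pulling a field out of a captured ring dump could be:

    def ring_field(ring_words, word_addr, bit_offset, width):
        # Extract one debug field from a dict of 32-bit ring registers.
        return (ring_words[word_addr] >> bit_offset) & ((1 << width) - 1)

    ring = {0x38: 0x0006E280, 0x39: 0xB001B8A0}
    print(hex(ring_field(ring, 0x38, 0x00, 32)))  # 0x6e280 - debug_fifo_in
    print(hex(ring_field(ring, 0x39, 0x1c, 3)))   # 0x3     - dbg_block_mem_ra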

    Acquiring the first images

    All the problems I encountered while trying to make the hardware work turned out to be reproducible (though not always easily) in simulation, and over the next 3 weeks I eliminated them one by one. When I got to the 51st version of the FPGA bitstream file (there were several more where I forgot to increment the version number), the camera started to produce consistently valid JPEG files.

    First 4-sensor image acquired with NC393 camera

    First 4-sensor image acquired with NC393 camera

    At that point I replaced the single sensor front end with no lens attached (just half of the sensor input window was covered with tape to produce a blurry shadow in the images) with four complete SFEs with lenses, simultaneously, using a piece of Eyesis4π hardware to point the individual sensors at 45° angles (in portrait mode), covering a combined 180°×60° FOV – that resulted in the images shown above. The sensor color gains are not calibrated (so there is a visible color mismatch) and the images are not stitched together (just placed side by side), but I consider it a significant milestone in the NC393 camera development.

    SATA controller status

    Almost at the same time, Alexey, who is working on the SATA controller for the camera, achieved an important milestone too. His code running on the Xilinx Zynq was able to negotiate and establish a link with an mSATA SSD connected to the NC393 prototype. There is still a fair amount of design work ahead until we will be able to use this controller with the camera, but at least the hardware operation of this part of the design is now verified too.

    What is next

    Having all the hardware on the 10393 verified we are now able to implement minor improvements and corrections to the 3 existing boards of the NC393 camera:

    • 10393 itself
    • 10389 – extension board with mSATA SSD, eSATA/USB combo connector, micro-USB and synchronization I/O
    • 10385 – power supply board

    And then we will make the first batch of the new cameras that will be available to other developers and customers.
    We also plan to make a new sensor board with the On Semiconductor (formerly Aptina, formerly Micron) MT9F002 – a 14MPix sensor with the same 1/2.3″ image format as the MT9P006 used in the current NC353 cameras. This 12-bit sensor will allow us to try a multi-lane high-speed serial interface while keeping the same physical dimensions of the sensor board and using the same lenses as we use now.

    by andrey at September 18, 2015 05:38 PM

    September 13, 2015

    Bunnie Studios

    Name that Ware, September 2015

    The Ware for September 2015 is shown below.

    This is a little something I was gifted at Burning Man this year. I wore it around my neck for a week and then brought it back to my lab in Singapore and tore it apart. Obviously, it suffered some kind of severe trauma. I’m particularly enamored with the way the silicon melted — instead of revealing crystalline facets at the former wirebond pads, a smooth, remodeled and rather amorphous surface is revealed with rivulets of silicon radiating from the craters. Now that’s hot!

    by bunnie at September 13, 2015 09:05 AM

    Winner, Name that Ware August 2015

    Last month’s ware is a controller board for a cutting machine, made by Polar-Mohr. The specific part number printed on the board is Polar SK 020162, which I’m guessing corresponds with this machine. Henry Valta pretty much nailed it, by guessing it as a Baum SK66 cutting circuit board. I’m not quite sure what the relationship is between Baumfolder and Polar-Mohr corporation, but it seems to be close enough that they share controller boards. Congrats, email me for your prize!

    I do have to give a shout-out to zebonaut for noting the use of “V” designators for discrete semiconductors and linking it to German/DIN-compliant origins. I’m pretty good at picking out PCBs made by Japanese manufacturers, and this little factoid will now help me identify PCBs of EU/German design origin.

    by bunnie at September 13, 2015 09:04 AM

    September 12, 2015

    Free Electrons

    The quest for Linux friendly embedded board makers

    Beagle Bone Black board

    We used to keep a list of Linux-friendly embedded board makers. When this page was created in the mid 2000s, it was easy to maintain. Though more and more products were created with Linux, it was still difficult to find good hardware platforms that were supported by Linux.

    So, to help community members and system makers select hardware for their embedded Linux projects, we compiled a first selection of board makers that met the criteria below:

    • Offering attractive and competitive products
    • At least one product supported Free Software operating systems (such as Linux, eCos and NetBSD).
    • At least one product meeting the above requirements, with a public price (without having to register), and still available on the market.
    • Specifications and documentation directly available on the website (no registration required). Engineers like to study their options on their own without having to share their contact details with salespeople who would then chase them through their entire life, trying to sell inappropriate products to them.
    • Website with an English version.

    In the beginning, this was enough to reduce the list to 10-20 entries. However, as Linux continued to increase in popularity, and as hardware platform makers started to understand the value of transparent pricing and technical documentation, the criteria were no longer sufficient to keep the list manageable.

    Therefore, we added another prerequisite: at least one product supported (at least partially) in the official version of the corresponding Free Software operating system kernel. This was a rather strong requirement at first, but only such products bring a guarantee of long-term community support, making it much easier to develop and maintain embedded systems. Compare this with hardware supporting only a very old and heavily patched Linux kernel, for example, whose software can only be maintained by its original developers. This also reveals the ability of the hardware vendor to work with the community and share technical information with its users and developers.

    Then, with the development of low-cost community boards, and chip manufacturers efforts to support their hardware in the mainline Linux kernel, the list again became difficult to maintain.

    The next prerequisite we could add is the availability as Open-source hardware, allowing customers to modify the hardware according to their needs. Of course, hardware files should be available without registration.

    However, rather than keeping our own list, the best is to contribute to Wikipedia, which has a dedicated page on Open-Source computing hardware. At least, all the boards we could find are listed there, after adding a few.

    Don’t hesitate to post comments to this page to share information about hardware which could be worth adding to this Wikipedia page!

    Anyway, the good news is that Linux and open-source friendly hardware is now much easier to find than it was about 10 years ago. Just prefer hardware that is supported in the mainline Linux kernel sources, or at least hardware from a maker whose earlier products are already supported. A git grep -i command in the sources will help.

    by Michael Opdenacker at September 12, 2015 05:21 PM

    September 06, 2015

    Video Circuits

    DIY video VCO

    Here are some shots of early XR2206-based video VCO experiments. The important thing with video is getting the sync pulses from your SPG into a format that your oscillator circuit wants to sync to: some are fine with narrow pulses, some want a nice clean saw wave or need the pulse to hit a certain voltage threshold. This means that if you don't have the skills to modify whatever SPG or VCO you have chosen, you will need sync conditioning circuits to sit in between, getting the two to talk nicely.




    by Chris (noreply@blogger.com) at September 06, 2015 09:43 AM

    August 31, 2015

    Free Electrons

    Linux 4.2 released, Free Electrons contributions inside

    Adelie Penguin
    Last Sunday, Linus Torvalds released version 4.2 of the Linux kernel. LWN.net covered the merge window of this 4.2 release cycle in 3 parts (part 1, part 2 and part 3), giving a lot of details about the new features and important changes.

    In a more recent article, LWN.net published some statistics about the 4.2 development cycle. In those statistics, Free Electrons appears as the 10th contributing company by number of patches with 203 patches integrated, and Free Electrons engineer Maxime Ripard is in the list of most active developers by changed lines, with 6000+ lines changed. See also http://www.remword.com/kps_result/ for more kernel contribution statistics.

    This time around, the most important contributions of Free Electrons were:

    • Support for Atmel ARM processors:
      • The effort to clean-up the arch/arm/mach-at91/ continued, now that the conversion to the Device Tree and multiplatform is completed. This was mainly done by Alexandre Belloni.
      • Support for the ACME Systems Arietta G25 was added by Alexandre Belloni.
      • Support for the RTC on at91sam9rlek was also added by Alexandre Belloni.
      • Significant improvements were brought to the dmaengine xdmac and hdmac drivers (used on Atmel SAMA5D3 and SAMA5D4), bringing interleaved support, memset support, and better performance for certain use cases. This was done by Maxime Ripard.
    • Support for Marvell Berlin ARM processors:
      • In preparation for the addition of a driver for the ADC, an important refactoring of the reset, clock and pinctrl drivers was done, using a regmap and the syscon mechanism to more easily share the common registers used by those drivers. Work done by Antoine Ténart.
      • An IIO driver for the ADC was contributed, which relies on the syscon and regmap mentioned above, as the ADC uses registers that are mixed with the clock, reset and pinctrl ones.
      • The Device Tree files were relicensed under GPLv2 and X11 licenses.
    • Support for Marvell EBU ARM processors:
      • A completely new driver for the CESA cryptographic engine was contributed by Boris Brezillon. This driver aims at replacing the old mv_cesa drivers, by supporting the newer features of the cryptographic engine available in recent Marvell EBU SoCs (DMA, new ciphers, etc.). The driver is backward compatible with the older processors, so it will be a full replacement for mv_cesa.
      • A big cleanup/verification work was done on the pinctrl drivers for Armada 370, 375, 38x, 39x and XP, leading to a number of fixes to pin definitions. This was done by Thomas Petazzoni.
      • Various fixes were made (suspend/resume improvements, big endian usage, SPI, etc.).
    • Support for the Allwinner ARM processors:
      • Support for the AXP22x PMIC was added by Boris Brezillon, including the support for the regulators provided by this PMIC. This PMIC is used on a significant number of Allwinner designs.
      • A small number of Device Tree files were relicensed under GPLv2 and X11 licenses.
      • A big cleanup of the Device Tree files was done by using more aggressively the “DT label based syntax”
      • A new driver, sunxi_sram, was added to support the SRAM memories available in some Allwinner processors.
    • RTC subsystem:
      • As was announced recently, Free Electrons engineer Alexandre Belloni is now the co-maintainer of the RTC subsystem. He has set up a Git repository at https://git.kernel.org/cgit/linux/kernel/git/abelloni/linux.git/ to maintain this subsystem. During the 4.2 release cycle, 46 patches were merged in the drivers/rtc/ directory: 7 were authored by Alexandre, and all other patches (with the exception of two) were merged by Alexandre, and pushed to Linus.

    The full details of our contributions:

    by Thomas Petazzoni at August 31, 2015 08:53 PM

    Video Circuits

    How Video Post-Production Effects were done in the 80s

    Continuing the theme of effects videos, here is a nice one about 80s era video effects.

    by Chris (noreply@blogger.com) at August 31, 2015 07:54 AM

    August 19, 2015

    Bunnie Studios

    Name that Ware August 2015

    The Ware for August 2015 is shown below.

    I found this kicking around in the South China Material market this past June. It is indeed a production board (and still in use today!), so there is a definitive answer to this month’s challenge sitting somewhere in the cloud. The extensive use of CD4000 series CMOS chips in this board brings a little grin to my face — haven’t seen one of those in ages (except for the CD4066, which is still pretty handy even in contemporary situations).

    Also, as a bonus, I found this in the same shop. This one isn’t for guessing, just for looking at. I’m a fan of FANUC.

    As an administrative note, images from this site and the kosagi wiki, and a few other miscellaneous services, will be off-line for a bit on September 2nd. There’s maintenance work scheduled on the power grid at my flat, and so my servers will be brought off-line. If all goes well, it’ll be just 15 minutes. However, if the mains breaker to my unit doesn’t automatically reset, it could be up to a few hours before someone can get to it. I’ll be somewhere in Black Rock City, far from the Internet, while this all goes down…so if something really unfortunate happens, it could be a week before things get restored from backups.

    by bunnie at August 19, 2015 10:31 AM

    Winner, Name that Ware July 2015

    The Ware for July 2015 was a bootlegged version of CAPCOM’s Carrier Air Wing. Congrats to pdw for nailing it, email me for your prize!

    And a big thanks to Felipe Sanches for contributing last month’s ware and helping to judge the winner.

    by bunnie at August 19, 2015 10:31 AM

    August 16, 2015

    Video Circuits

    Video Screening in Tokyo

    Alex organised a great screening in Tokyo – check out the flyer.




    by Chris (noreply@blogger.com) at August 16, 2015 07:16 AM

    August 10, 2015

    ZeptoBARS

    LM319M : weekend die-shot

    LM319M - "high speed" (80ns) dual comparator.
    Die size 2017x700 µm.


    August 10, 2015 05:09 AM

    August 03, 2015

    Free Electrons

    Free Electrons talks at the Embedded Linux Conference Europe

    Father Mathew Bridge

    The Embedded Linux Conference Europe 2015 will take place on October 5-7 in Dublin, Ireland. As usual, the entire Free Electrons engineering team will participate in the event, as we believe it is one of the best ways for our engineers to remain up to date with the latest embedded Linux developments and connect with other embedded Linux and kernel developers.

    The conference schedule has been announced recently, and a number of talks given by Free Electrons engineers have been accepted:

    We submitted other talks that got rejected, probably since both of them had already been given at the Embedded Linux Conference in California: Maxime Ripard’s talk on dmaengine and Boris Brezillon’s talk on supporting MLC NAND (which we regret since Boris is currently actively working on this topic, so we are expecting to have some useful results by the time of ELCE, compared to his ELC talk which was mostly a presentation of the issues and some proposals to address them). Interested readers can anyway watch those talks and/or read the slides.

    In addition to the Embedded Linux Conference Europe itself:

    • Thomas Petazzoni will participate in the Buildroot developers meeting on October 3/4, right before the conference.
    • Alexandre Belloni will participate in the OEDEM, the 2015 OpenEmbedded Developer’s European Meeting, taking place on October 9 after the conference.

    by Thomas Petazzoni at August 03, 2015 12:08 PM

    July 29, 2015

    Elphel

    NC393 progress update and a second life of the NC353 FPGA code

    Another update on the development of the NC393 camera: I finished adding the FPGA code that re-implements the functionality of the NC353 camera (just with additional multi-sensor capability), including the JPEG/JP4 compressors, the IMU/GPS logger and inter-camera synchronization. The next step is simulation and debugging, and it will involve co-simulating the same sensor image data with the code of the existing NC353 camera. This requires updating that camera's code to a state compatible with the development tools we use, and so an additional sub-project was spawned.

    Verilog code development with VDT plugin for Eclipse IDE

    Before describing the renovation of the NC353 camera FPGA code, I need to tell about the software we have been using for the last year. Living in a world where FPGA chip manufacturers have a monopoly (or duopoly, as there are 2 major players) on rather poor software tools, I realize that this will not change in the short term. But it is possible to constrain those proprietary creations in designated “cages”, letting them do only the tasks that require secret knowledge of the chip internals, while not letting them take control of the whole development process or make you depend on them abandoning one software environment and introducing another half-made one as soon as you get used to the previous.

    This is what VDT is about – it uses one of the most standard development environments, the Eclipse IDE, and combines it with a heavily modified version of VEditor and the Tool Specification Language that allows developers to integrate additional tools without getting inside the plugin code itself. Integration involves writing tool descriptions in TSL (this work is based on the tool manufacturer's manual that specifies command options and parameters) and possibly creating custom parsers for the tool output – these programs may be written in any programming language the developer is comfortable with.

    Current integration includes the Free Software simulation programs (such as Icarus Verilog with GtkWave). As it is safe to rely on Free Software, we may add code specific to these programs to the plugin body to get deeper integration, combining code and waveform navigation and breakpoint support.

    For the FPGA synthesis and implementation tools, this software supports Xilinx ISE and Vivado, and we are now working on Altera Quartus too. The VDT code does not depend on the specifics of each of these tools, and the tools are connected to the IDE using ssh and rsync, so they do not have to run on the same workstation.

    Renovating the NC353 camera code

    Initially I just planned to enter the NC353 camera FPGA code into the VDT environment for simulation. When I opened it in this IDE, it showed more than 200 warnings in the code. Most were just unused wires/registers and signal width mismatches that did not impact the functioning of the camera, but at least one was definitely a bug – one that gets control on very rare occasions and so is difficult to catch.

    When I had fixed most of these warnings and made sure simulation worked, I decided to try to run the ISE 14.7 tools and generate a functional bitstream. There were multiple incompatibilities between ISE 10 (which was last used to generate a bitstream) and the current version – most modifications were needed to change the description of the I/O standard and other parameters of the device pins (from the constraint file and “// synthesis attribute …” comments in the code to the modern style of using parameters).

    That turned out to be doable – first I made the design agree with all the tools down to the very last step (bitstream generation), then reconciled the generated pad report with the one generated by the old tools (there are still some differences remaining, but they are understandable and OK). Finally I had to figure out that I needed to turn on a non-default option to use timing constraints, and how to change the speed grade to match the one used with the old tools; that resulted in a bitstream file that I tested on just one camera, and I got images. It was the second attempt – the first one resulted in a “kernel panic” and I had to reflash the camera. The project repository has a detailed description of how to make such testing safe, but it is still better to try your modified FPGA code only if you know how to “unbrick” the camera.

    We’ll do more testing of the bit files generated by the ISE 14.7, but for now we need to focus on the NC393 development and use NC393 code as a reference for simulation.

    Back to NC393

    Before writing simulation test code for the NC393 camera, I made the code pass all the Vivado tools and produce a bitfile. That required some code tweaking, but finally it worked. Of course there will be some code changes to fix bugs revealed during verification, but most likely the changes will not be radical. This assumption lets us see the overall device utilization and confirm that the final design is going to fit.

    Table 1. NC393 FPGA Resources Utilization
    Type Used Available Utilization(%)
    Slice 14222 19650 72.38
    LUT as Logic 31448 78600 40.01
    LUT as Memory 1969 26600 7.40
    LUT Flip Flop Pairs 44868 78600 57.08
    Block RAM Tile 78.5 265 29.62
    DSPs 60 400 15.00
    Bonded IOB 152 163 93.25
    IDELAYCTRL 3 5 60.00
    IDELAYE2/IDELAYE2_FINEDELAY 78 250 31.20
    ODELAYE2/ODELAYE2_FINEDELAY 43 150 28.67
    ILOGIC 72 163 44.17
    OLOGIC 48 163 29.45
    BUFGCTRL 16 32 50.00
    BUFIO 1 20 5.00
    MMCME2_ADV 5 5 100.00
    PLLE2_ADV 5 5 100.00
    BUFR 8 20 40.00
    MAXI_GP 1 2 50.00
    SAXI_GP 2 2 100.00
    AXI_HP 3 4 75.00
    AXI_ACP 0 1 0.00

    One AXI general purpose master port (MAXI_GP) and one AXI “high performance” 64-bit slave port are reserved for the SATA controller, and the 64-bit cache-coherent port (AXI_ACP) will be used for CPU accelerators for the multi-sensor image processing.

    The next development step will be simulation and debugging of the project code, and luckily a large part of the code can be verified by comparing it with the older NC353.

    by andrey at July 29, 2015 07:59 AM

    July 19, 2015

    Bunnie Studios

    Name that Ware, July 2015

    The Ware for July 2015 is shown below:

    Ahh…hardware from the 80’s/early 90’s. My favorite era, when circuit board traces were laid out freehand using pen or tape and 74-series logic gates were still a thing. Thanks to Felipe Sanches for providing the ware, and to xobs for taking the photos while he was in Brazil for his keynote at FISL16!

    Sorry for the lack of updates on this blog, it’s been a busy summer. To get a whiff of what I’ve been up to, check out my article in Wired Magazine on trends enabling the decentralization of innovation in hardware and Jinjoo’s blog-in-progress on the manufacturing bootcamp I held this summer in Shenzhen for MIT Media Lab students, which also happened to be the inaugural application of our new Orchard IoT Platform.

    by bunnie at July 19, 2015 03:12 PM

    Winner Name that Ware June 2015

    The Ware for June 2015 is, in fact, an HV supply for driving an X-ray tube, and during normal operation it’s immersed in oil. I’ll give the prize to Matt Sieker, for being the first to correctly guess the ware.

    Interesting that so many people found it to be “obviously” an HV supply for an X-ray tube — first time I had ever seen one! I found the construction details of the high voltage transformers to be interesting. Certainly a domain in which I have little direct design expertise.

    by bunnie at July 19, 2015 03:11 PM

    July 11, 2015

    ZeptoBARS

    Mikron 1663RU1 - first Russian 90nm chip : weekend die-shot

    Mikron is currently the most advanced microelectronics fab in Russia, located in Zelenograd. In 2010 they licensed 90nm technology from STMicroelectronics, and the equipment setup was somewhat ready by the end of 2012. The technology transfer was hindered by very small manufacturing volume and scarce funding. Nevertheless, the 1663RU1 became their first 90nm product to reach commercial customers. It's a 16 Mibit SRAM chip.

    There is no redundancy or ECC correction on this chip, and it uses bulk-Si ("civilian") technology. No radiation-hardening tricks are implemented. The chip is apparently intended for industrial/military applications; use in space is only possible with great care.






    After metallization etch. Each small square is a 64x128-bit matrix, 16 Mibit total.


    Finally, the SRAM cells themselves. Cell area is 1.2 µm², which is average for 90nm technology (the best are around 1 µm²). Scale is 1 px = 57 nm.


    For comparison, here is 180nm SRAM from STMicroelectronics at the same scale (STM32F100C4T6B).


    If we take a look at the area where bits of the first metal layer are preserved, we can see that Mikron is using a litho-friendly SRAM design, where the critical layers use only straight lines.


    Here is Andrew Zonenberg's suggested layout of the 6T SRAM cell:


    Die size 5973x6418 µm.

    July 11, 2015 12:53 PM

    July 10, 2015

    Elphel

    GTX_GPL – Free Software Verilog module to simulate a proprietary FPGA primitive

    Widespread high-speed protocols based on serial interfaces have become easier and easier to implement on FPGAs. Looking at Xilinx's chip families, you can trace the evolution of the embedded transceivers from awkwardly inflexible models to much more capable ones. Nowadays even the affordable 7 series FPGAs include GTX transceivers. Essentially, they unify the physical layers of various protocols, with the versatility provided by parameters and control input signals.
    The problem is that, for some reason, the GTX simulation model is a secured (encrypted) IP block. That means that without proprietary software it's impossible to compile and simulate the transceiver. Moreover, we use Icarus Verilog for these purposes, which doesn't provide deciphering capabilities for now and doesn't seem likely to ever do so: http://sourceforge.net/p/iverilog/feature-requests/35/

    Still, our NC393 camera has to use the GTX as part of its SATA host controller design. That's why we decided to create a small simulation model that behaves like the GTX, at least within certain limitations and assumptions. This was done so that we could build a full-fledged non-synthesizable verification environment and provide our customers with a solution that is universal for simulation purposes.

    The project itself can be found on GitHub. The implementation is still crude and contains only the bare minimum required to achieve our goals. However, it is meant to allow extension to other protocols. That's why it preserves the original GTX structure as presented in Xilinx's “7 Series FPGAs GTX/GTH Transceivers User Guide v1.11″, also known as UG476: http://www.xilinx.com/support/documentation/user_guides/ug476_7Series_Transceivers.pdf
    The overall design of the so-called GTX_GPL is split into four parts, contained in a wrapper that ensures interface compatibility with the original GTX. These parts are: TX (transmitter), RX (receiver), channel clocking and common clocking.
    The whole clocking scheme is based on the assumption that clocks, PLLs and interconnects are ideal, so no setup/hold violations or metastability are expected. That by itself makes the design non-synthesizable, but it greatly reduces complexity.
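
    To illustrate that simplification, here is a minimal non-synthesizable sketch (my own code, not part of the GTX_GPL repository) of what an "ideal" clock multiplier can look like in behavioral Verilog: the reference period is simply measured with $realtime and the output clock is generated at a multiple of it, with no lock time, jitter or phase error modelled.

        `timescale 1ns/1ps
        // Hypothetical sketch of an "ideal" PLL model - assumes the reference
        // clock is stable; lock time, jitter and phase alignment are ignored.
        module ideal_pll #(
            parameter integer MULT = 10        // fout = MULT * fin
        )(
            input  wire refclk,
            output reg  outclk = 1'b0
        );
            realtime t_prev = 0.0, period = 0.0;

            // measure the reference clock period
            always @(posedge refclk) begin
                if (t_prev > 0.0) period = $realtime - t_prev;
                t_prev = $realtime;
            end

            // free-running output at MULT times the measured frequency
            always begin
                if (period > 0.0) begin
                    outclk = 1'b1; #(period / (2.0 * MULT));
                    outclk = 1'b0; #(period / (2.0 * MULT));
                end else
                    @(posedge refclk);         // wait until the period is known
            end
        endmodule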

    RX - Receiver

    Receiver + Clocking

    TX - Transmitter

    Transmitter + Clocking

    The transmitter and receiver schemes are presented in the figures above, each with its clocking mechanism. You can compare them to the corresponding GTX diagrams (see UG476, pages 107, 133, 149, 169). As you can see, TX and RX lack some of the original functional blocks. However, many of those are important only for synthesis or for precise post-synthesis simulation, such as phase adjustment or analog-level blocks. Others (like the gearbox) are unnecessary for SATA, and implementing them would be costly.
    Despite all of that, the current implementation passes some basic tests with SATA parameters enabled. The resulting waveforms were compared to those obtained by swapping GTX_GPL for the original GTXE2_CHANNEL primitive as the device under test, and they showed more or less the same behavior.

    You can access the current version via GitHub. It's not necessary to clone or download the whole repository; it is enough to grab the ‘GTXE2_CHANNEL.v’ file from there. This file is a collection of all the necessary modules from the repository, with GTXE2_CHANNEL as the top. After including it in your project (or linking it as a library/source file), the original unisims GTXE2_CHANNEL primitive will be overridden.
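
    For illustration only (this snippet is mine, not from the repository): the override works because an instantiation of GTXE2_CHANNEL in user code binds to whichever definition the simulator has compiled, i.e. the encrypted unisims model under the Xilinx tools, or the GPL model when GTXE2_CHANNEL.v from the repository is compiled instead. The port list below is heavily abridged; see UG476 and the repository for the real interface. With Icarus Verilog it is then enough to compile GTXE2_CHANNEL.v together with the rest of the design sources.

        // user-level wrapper instantiating the transceiver primitive by name;
        // parameters and most of the several hundred ports are omitted here
        module sata_phy_stub (
            input  wire        txusrclk2,
            input  wire        rxusrclk2,
            input  wire [63:0] txdata,
            output wire [63:0] rxdata,
            input  wire        gtxrxp, gtxrxn,
            output wire        gtxtxp, gtxtxn
        );
            GTXE2_CHANNEL gtx_i (
                .TXUSRCLK2 (txusrclk2),
                .RXUSRCLK2 (rxusrclk2),
                .TXDATA    (txdata),
                .RXDATA    (rxdata),
                .GTXRXP    (gtxrxp),
                .GTXRXN    (gtxrxn),
                .GTXTXP    (gtxtxp),
                .GTXTXN    (gtxtxn)
                // ... remaining ports and SATA-related parameters omitted ...
            );
        endmodule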

    If you find some bugs during simulation in SATA context or you want some features to be implemented (within any protocol’s set-up), feel free to leave a message via comments, PM or github.

    Overall, the design should be useful for verification purposes. It allows creating a proper GPL-licensed simulation and verification environment that is not hard-bound to proprietary software.

    by Alexey at July 10, 2015 03:04 AM

    July 01, 2015

    Bunnie Studios

    Name that Ware, June 2015

    The Ware for June 2015 is shown below.

    Thanks to Dan Scherer for contributing this ware! I don’t have a specific make/model number for it, but a general idea of what it’s for, so I’ll try my best to judge the submissions given partial information.

    by bunnie at July 01, 2015 02:57 AM

    Winner, Name that Ware May 2015

    The Ware for May 2015 is a DVB antenna amplifier. The brand/model number is Draco-HDT2-7300. Lots of excellent submissions, and in an act of totally arbitrary judgment I’ll say pelrun is the winner for being the first to call it an amplified TV antenna. Gratz, email me for your prize!

    by bunnie at July 01, 2015 02:56 AM

    June 29, 2015

    ZeptoBARS

    BFG135 - NPN 7GHz RF BJT transistor : weekend die-shot

    BFG135 - a 7GHz RF NPN transistor with integrated emitter-ballasting resistors. The transistor layout is made this sparse to lower the thermal resistance (mainly) and the collector resistance.
    Die size 668x538 µm; transistor fin half-pitch is 800nm.



    Closer look:

    June 29, 2015 12:07 AM

    June 28, 2015

    Video Circuits

    Rob Schafer and Donny Blank - Interview from 1983 - Historical look at Video Synthesis



    "Rob Schafer and Donny Blank - Interview from 1983 on the video synthesizer.
    Posted by Video 4 ( Synopsis Video) - Denise Gallant"

    by Chris (noreply@blogger.com) at June 28, 2015 03:29 AM

    June 24, 2015

    ZeptoBARS

    nRF51822 - Bluetooth LE SoC : weekend die-shot

    nRF51822 is a widely used Bluetooth LE SoC with a Cortex-M0 core and an on-chip buck DC-DC converter (the L and C are external).
    Die size 3833x3503 µm, ~180nm technology.


    June 24, 2015 10:02 AM

    June 21, 2015

    Video Circuits

    Dan Bucciano

    Dan Bucciano recently posted this fantastic clip to the discussion group, so I thought I would share. It's some lovely black and white feedback processed with a color solarizer prototype he designed around 20 years ago.



    by Chris (noreply@blogger.com) at June 21, 2015 12:22 AM

    June 18, 2015

    Free Electrons

    Buildroot 2015.05 release, Free Electrons contributions inside

    The Buildroot project has recently released a new version, 2015.05. With exactly 1800 patches, it’s the largest release cycle ever, with patches from more than 100 different contributors. That’s an impressive number, showing the growing popularity of Buildroot as an embedded Linux build system.

    The CHANGES file summarizes the most important improvements of this release.

    Amongst those 1800 patches, 143 patches were contributed by Free Electrons. Our most significant contributions for this release have been:

    • Addition of a package for the wf111 WiFi drivers. They allow using a WiFi chip from Bluegiga, which is being used in one of our customer projects.
    • Addition of support for using uClibc-ng. uClibc-ng is a “collaborative” fork of the uClibc project, which aims at doing more regular releases and having better testing. Maintained by Waldemar Brodkorb, the project has already seen several releases since its initial 1.0 release. Waldemar merges patches from the original uClibc regularly, and adds more fixes. It allows Buildroot and other uClibc users to have well-identified stable uClibc versions instead of a three-year-old 0.9.33.2 version with dozens of patches on top of it. uClibc-ng is not used as the default uClibc version as of 2015.05, but it might very well become the default in 2015.08.
    • Important internal changes to the core infrastructure. Until this release, the make legal-info, make source, make external-deps and make source-check logic relied only on the Buildroot configuration file. This gave correct results for target packages, which all have a corresponding Buildroot configuration option, but not for host packages (most of which don’t have Buildroot configuration options). Only a manual, two-level dependency handling was done for host packages for the above-mentioned commands. With our work, the handling of those features has been moved into the package infrastructure itself, so it uses proper make recursion to resolve the entire dependency tree. Because of this, the output of make legal-info or make external-deps may be longer following this release, but that’s because it is now actually correct and complete. You can look at the patches for more details; these changes go very deep into the core Buildroot infrastructure.
    • Large number of build fixes. We contributed 52 patches fixing issues detected by the autobuild infrastructure.
    • Addition of the imx-usb-loader package, which can be used to load a new bootloader over USB on i.MX6 platforms, even if the platform has no bootloader or a broken one. We also use it as part of one of our customer projects.

    With 142 patches, Free Electrons engineer Thomas Petazzoni is the third largest contributor to this release by number of patches:

    git shortlog -s -n 2015.02..
    
       397	Bernd Kuhls
       393	Gustavo Zacarias
       142	Thomas Petazzoni
    

    But our most important contribution by far for this release is Thomas acting as the interim maintainer: of the 1800 patches merged for this release, Thomas was the committer of 1446. He has therefore been very active in merging the patches contributed by the Buildroot community.

    There are already some very interesting goals set for the Buildroot 2015.08 release, as you can see on the Buildroot release goals page.

    Also, if you want to learn Buildroot in detail, do not hesitate to look at our Buildroot training course!

    by Thomas Petazzoni at June 18, 2015 08:34 AM

    June 17, 2015

    Video Circuits

    Synapse by Christian Greuel



    "Christian GreuelFake Space Labs / CalArts (1992)

    An abstract work of visual music, “Synapse” is a stylized interpretation of the inner senses as they are lifted from a state of despondency to find temporary asylum in a delirious moment of lucidity. This mindscape takes us on a ride through the turbulence of the psyche using vintage real-time 3D graphics and electronic synthesis technology.

    The base graphics were created at Fake Space Labs during an Artist-in-Residency (1991-92) and repurposed for this work in 2003.

    Video and Music: Christian Greuel
    A/D Transfer: Aaron Ross (2003)
    Thanks to: Mark Bolas and Eric Gullichsen

    Graphics created at: Fake Space Labs (1992)
    Video processed at: California Institute of the Arts (1992)
    Music created at: California Institute of the Arts (1991)

    Software: Sense8 WorldToolKit 1.0, AutoCAD (3D models), ColoRIX (2D textures)
    Hardware: i386 PC (4MB RAM), DS1 DVI video card, CRT display, 3/4" video tape and camera
    Video Processing Hardware: Hearne/EAB Videolab, Fairlight CVI
    Audio: Roland SH-5 analog synthesizer, Ampex 456 4-track 1/4" analog tape"

    by Chris (noreply@blogger.com) at June 17, 2015 01:55 PM

    June 14, 2015

    Video Circuits

    McConnell Macro Video Synthesis System

    Here is something you don't see every day: an Amiga Video Toaster / Atari Falcon based video synthesizer with multiple other signal paths going on. Thanks to Matthew McConnell for the upload! http://www.tecterran.com/sonovista Not your standard analogue setup, and much closer to systems from the mid 90s.

    by Chris (noreply@blogger.com) at June 14, 2015 10:44 AM

    June 12, 2015

    Free Electrons

    Free Electrons engineer Alexandre Belloni co-maintainer of Linux Atmel processor support

    After becoming the co-maintainer of the Linux RTC subsystem, Free Electrons engineer Alexandre Belloni recently also became a co-maintainer of Atmel ARM processor support in the Linux kernel.

    Free Electrons has been working with Atmel since early 2014 to improve support for their processors in the mainline kernel. Since then, our work has mainly consisted of:

    • Modernizing the existing code for Atmel processors: completing the switch to the Device Tree and the common clock framework for all platforms, reworking everything needed to make Atmel processor support compatible with the ARM multiplatform kernel, and doing a lot of related driver and platform refactoring.
    • Implementing a complete DRM/KMS driver for the display subsystem of the most recent Atmel processors.
    • Upstreaming support for the Atmel SAMA5D4, the latest Cortex-A5 based SoC from Atmel.

    Thanks to this long-term involvement from Alexandre Belloni and Boris Brezillon, Alexandre was appointed as a co-maintainer of Atmel support, replacing Andrew Victor, who hasn’t been active in kernel development for quite some time. He is joining Nicolas Ferre and Jean-Christophe Plagniol-Villard in the team of maintainers for the Atmel platform.

    Alexandre sent his first pull request as an Atmel co-maintainer on May 22, with 9 patches for the ARM SoC maintainers, planned for the 4.2 kernel release. His pull request was quickly merged by ARM SoC maintainer Arnd Bergmann.

    Free Electrons is proud to have one of its engineers as a maintainer of a very popular embedded Linux platform, one that has had a strong commitment to upstream Linux kernel support for many years. Alexandre is the third Free Electrons engineer to become an ARM platform maintainer: Maxime Ripard is the maintainer of Allwinner ARM processor support, and Gregory Clement is the co-maintainer of Marvell EBU ARM processor support.

    by Thomas Petazzoni at June 12, 2015 12:24 PM

    June 11, 2015

    Free Electrons

    Embedded Linux Projects Using Yocto Project Cookbook


    We were kindly provided a copy of Embedded Linux Projects Using Yocto Project Cookbook, written by Alex González. It is available at Packt Publishing, either in an electronic format (DRM free) or printed.

    It is written as a cookbook, so it is a set of recipes that you can refer to in order to solve your immediate problems, rather than reading it from cover to cover. While, as indicated by the title, the main topic is embedded development using the Yocto Project, the book also includes generic embedded Linux tips, like debugging the kernel with ftrace or debugging a device tree from U-Boot.

    The chapters cover the following topics:

    • The Build System: an introduction to Yocto Project.
    • The BSP Layer: how to build and customize the bootloader and the Linux kernel, plenty of tips on how to debug kernel related issues.
    • The Software Layer: covers adding a package and its configuration, selecting the initialization manager, and making a release while complying with the various licenses.
    • Application development: using the SDK, various IDEs (Eclipse, Qt creator), build systems (make, CMake, SCons).
    • Debugging, Tracing and Profiling: great examples and tips for the usage of gdb, strace, perf, systemtap, OProfile, LTTng and blktrace.

    The structure of the book makes it easy to find the answers you are looking for, and it also explains the underlying concepts of each solution. It is definitely of good value once you start using the Yocto Project.

    Free Electrons is also offering a Yocto Project and OpenEmbedded training course (detailed agenda) to help you start with your projects. If you’re interested, join one of the upcoming public training sessions, or order a session at your location!

    by Alexandre Belloni at June 11, 2015 10:07 AM

    June 10, 2015

    Elphel

    NC393 progress update: HDL code for sensor channels is ported or re-written

    Quick update: a new chunk of code has been added to the NC393 camera FPGA project. It is the second of the three major parts of the system needed to match the existing NC353 functionality, following the already finished memory controller. This code has just been written; it still has to be verified by simulation first, and then by synthesizing it and running it on the actual hardware. We plan to do that when the third part – the image compressors – is ported to the new system too. The added code deals with receiving data from the image sensors and pre-processing it before storing it in the video memory. FPGA-based systems are very flexible, and many other configurations – such as support for multi-lane serial interface sensors, or using several camera ports to connect a single large high-speed sensor – are possible and will be implemented later. The table below summarizes the parameters of the current code only.

    Table 1. NC393 Sensor Connections and Pre-processing
    Feature                                                                         Value
    Number of sensor ports                                                          4
    Total number of multiplexed sensors                                             16
    Total number of multiplexed sensors with the existing 10359 multiplexer board   12
    Sensor interface type (implemented in HDL)                                      parallel, 12 bits
    Sensor interface hardware compatibility                                         parallel LVCMOS / serial differential, 8 lanes + clock
    Sensor interface voltage levels                                                 programmable, up to 3.3V
    Number of I²C sequencers                                                        4 (1 per port)
    Number of I²C sequencer frames                                                  16
    Number of I²C sequencer commands per frame                                      64
    I²C sequencer command data width                                                16/8 bits
    Image data width stored                                                         16/8 bits per pixel
    Gamma conversion regions per port                                               4
    Histograms: number of rectangular ROIs (Regions of Interest) per port           4
    Histograms: number of color channels                                            4
    Histograms: number of bins per color                                            256
    Histograms: width per bin                                                       18 or 32 bits
    Histograms: number of histograms stored per sensor                              16

    Up to four sensor channel modules can be instantiated in the camera, one per sensor port. In most applications all ports will run at the same clock frequency, but each of them can use a different clock, so heterogeneous sensors can be attached if needed. The current modules support 12-bit parallel data (such as the Aptina MT9P006 sensors we currently use); the 8-lane + clock serial differential interface will be added later.

    The sensor modules include programmable delay elements on each input line to optimize data sampling, and a small FIFO to compensate for phase variations between the free-running system clocks and the sensor output clocks, which are influenced by the sensor and optional multiplexer PLLs.

    Similarly to the NC353, the sensor modules contain dedicated I²C sequencers. These sequencers synchronize the I²C commands sent to the sensors with the sensor frame sync signals; they also relax the response time requirements on the software – commands can be scheduled ahead of time to be executed at a certain frame number.
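
    As a rough illustration of the idea, here is a hypothetical sketch (not Elphel's actual RTL): a command store indexed by frame number, a frame counter advanced by the sensor frame sync, and a small handshake that releases only the commands queued for the current frame.

        // Hypothetical sketch of frame-synchronized I2C command scheduling.
        // For simplicity there is a single command slot per future frame;
        // the real sequencers allow up to 64 commands per frame.
        module i2c_frame_sequencer #(
            parameter FRAME_BITS = 4,             // schedule up to 16 frames ahead
            parameter CMD_BITS   = 32             // slave address + register + data
        )(
            input  wire                  clk,
            input  wire                  rst,
            input  wire                  frame_sync,   // from the sensor
            // software side: schedule a command for an absolute frame number
            input  wire                  wr_en,
            input  wire [FRAME_BITS-1:0] wr_frame,
            input  wire [CMD_BITS-1:0]   wr_cmd,
            // downstream I2C master
            output reg                   cmd_valid = 1'b0,
            output reg  [CMD_BITS-1:0]   cmd_data,
            input  wire                  cmd_ready
        );
            reg [CMD_BITS-1:0]   slot      [0:(2**FRAME_BITS)-1];
            reg                  slot_full [0:(2**FRAME_BITS)-1];
            reg [FRAME_BITS-1:0] cur_frame = {FRAME_BITS{1'b0}};
            integer i;

            always @(posedge clk) begin
                if (rst) begin
                    cur_frame <= 0;
                    cmd_valid <= 1'b0;
                    for (i = 0; i < 2**FRAME_BITS; i = i + 1) slot_full[i] <= 1'b0;
                end else begin
                    if (wr_en) begin                     // scheduled ahead of time
                        slot[wr_frame]      <= wr_cmd;
                        slot_full[wr_frame] <= 1'b1;
                    end
                    if (frame_sync)                      // a new frame has started
                        cur_frame <= cur_frame + 1'b1;
                    if (cmd_valid) begin
                        if (cmd_ready) cmd_valid <= 1'b0;        // accepted by the I2C master
                    end else if (slot_full[cur_frame]) begin
                        cmd_data             <= slot[cur_frame]; // issue for this frame
                        cmd_valid            <= 1'b1;
                        slot_full[cur_frame] <= 1'b0;
                    end
                end
            end
        endmodule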

    Each of the sensor channels is designed to be compatible with a sensor multiplexer, such as the 10359 used in the current Elphel multi-sensor cameras. These boards connect to three sensor boards and present themselves to the system as a single large sensor. Images are acquired simultaneously by all three imagers; one is immediately routed downstream, and the other two are stored in on-board memory. After the first image is transferred to the camera system board, data from the other two sensors is read from memory and transferred in the same format as received from the sensors, so the system board receives data as if from a sensor with three times more lines. What is different in the NC393 camera code compared to the NC353 is that the code is now aware of the multiplexers and is able to apply a different conversion to each sub-image and to calculate histograms (used for autoexposure and white balance) for each sub-image. The current NC353 camera (and multisensor cameras based on the same design) uses the same settings for the whole composite image coming from the multiplexer and has only one histogram window of interest.

    The channel modules are parameterized and can be fine-tuned for particular applications to reduce resource usage. For example, the histogram modules can be either 18 bits (sufficient in most cases) or a full 32 bits wide, and the histogram data may be buffered (required only for sensors with very small vertical blanking when using a full-frame histogram WOI) or not buffered. Depending on these settings, either one or two block RAM hard macros are instantiated.
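
    The following fragment is a simplified sketch of mine (not the actual Elphel module) showing the general shape of such parameterization in Verilog: the bin width is a parameter, and a generate block decides whether the second buffer memory, and therefore the second block RAM, is instantiated at all.

        // Hypothetical sketch: parameterized histogram storage. Accumulation
        // details (clearing between frames, read-modify-write pipelining)
        // are omitted; only the width/buffering parameterization is shown.
        module histogram_mem #(
            parameter HIST_WIDTH = 18,   // 18 (default) or a full 32 bits per bin
            parameter BUFFERED   = 0     // 1 = add a snapshot RAM for readout
        )(
            input  wire                  clk,
            input  wire                  px_valid,
            input  wire [7:0]            px_bin,    // 256 bins per color channel
            input  wire [7:0]            rd_addr,
            output wire [HIST_WIDTH-1:0] rd_data
        );
            reg [HIST_WIDTH-1:0] acc [0:255];       // accumulation memory
            always @(posedge clk)
                if (px_valid)
                    acc[px_bin] <= acc[px_bin] + 1'b1;

            generate
                if (BUFFERED) begin : g_buf
                    // second RAM: a copy software can read while the next frame
                    // accumulates - this is what costs the extra block RAM
                    reg [HIST_WIDTH-1:0] snap [0:255];
                    reg [HIST_WIDTH-1:0] rd_r;
                    always @(posedge clk) begin
                        snap[px_bin] <= acc[px_bin];  // simplistic copy-through
                        rd_r         <= snap[rd_addr];
                    end
                    assign rd_data = rd_r;
                end else begin : g_nobuf
                    reg [HIST_WIDTH-1:0] rd_r;
                    always @(posedge clk)
                        rd_r <= acc[rd_addr];
                    assign rd_data = rd_r;
                end
            endgenerate
        endmodule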

    Histogram data generated from all four ports (from up to 16 sensors) is transferred to the system memory, and each of the 16 channels stores data for the last 16 frames acquired. This multi-frame storage eases the timing requirements on the software that processes the histograms. The data is sent over the general purpose S_AXI_GP0 port; this medium-speed interface is quite adequate for this amount of data, while the high-speed 64-bit AXI_HP* ports are reserved for the higher-bandwidth image transfers.

    by andrey at June 10, 2015 02:54 AM

    June 09, 2015

    Free Electrons

    Embedded Linux and kernel job openings for 2015

    At Free Electrons, we are starting to get more and more requests for very cool projects. As it can be very frustrating to turn down very interesting opportunities (such as projects that allow us to contribute to the Linux kernel, Buildroot or the Yocto Project), we have decided to look for new engineers to join our technical team.

    Job description in a nutshell

    • Technical aspects: mainline Linux kernel development, Linux BSP and embedded Linux system integration, technical training
    • Location: working in one of our offices in France (Toulouse or Orange)
    • Contract: full-time, permanent French contract

    Mainline Linux kernel development

    Believe it or not, we now have an increasing number of customers contracting us to support their hardware in the mainline Linux kernel. They are either System on Chip manufacturers or systems makers, who now understand the strong advantages brought by mainline Linux kernel support to their customers and to themselves.

    You can see the results: Free Electrons is now consistently in the top 20 companies contributing to the Linux kernel. We are even number 6 for Linux 4.0!

    Note that this job doesn’t only require technical skills. It also has a strong social dimension, having to go through multiple iterations with the community and with kernel subsystem maintainers to get your code accepted upstream.

    Linux BSP and embedded Linux system integration

    This activity involves developing and integrating everything that’s needed to deploy Linux on the customer hardware: bootloader, kernel, build environment (such as Buildroot or the Yocto Project), upgrade system, optimizing performance (such as boot time) and fixing issues. Another variant is to provide guidance and support to customers learning to do such work themselves.

    As opposed to Linux kernel development projects, which are often long-term (though with step-by-step objectives that can be reached in days), these are usually shorter and more challenging projects. They allow us to stay in touch with the real-life challenges that customer engineers face every day, and they require achieving substantial results in a relatively small number of days.

    Such projects also constitute opportunities to contribute improvements to the mainline kernel and bootloader projects, as well as to the build system projects themselves (Buildroot, Yocto Project, OpenWRT…).

    Training and sharing experience

    Knowledge sharing is an important part of Free Electrons’ mission and activity. Hence, another important aspect of the job is teaching, maintaining and improving Free Electrons’ training courses.

    You will also be strongly encouraged to share your technical experience by writing blog posts or kernel documentation, and by proposing talks at international conferences, especially the Embedded Linux Conference (USA, Europe).

    Profile

    • Experience: we are open to both experienced engineers and people just coming out of engineering school. Though prior experience with the technical topics will be an advantage, we are also interested in young engineers demonstrating great potential for learning, coding and knowledge sharing. People who have made visible contributions in these areas will have an advantage too.
    • Language skills: fluency in oral and written English is very important. French speaking skills are not a requirement, but they are an advantage too.
    • Traveling: for training sessions and conference participation, you will need the ability to travel rather frequently, up to 8-10 times a year.
    • Ability to relocate to one of our offices in France, either in Toulouse or in Orange, to strengthen our engineering teams there.

    Details about Toulouse and Orange

    • Toulouse is a dynamic city with lots of high-tech companies, embedded systems companies in particular. Our office in Colomiers can easily be reached by train from downtown Toulouse if you wish to settle there. You would be working with Boris Brezillon, Antoine Ténart, Maxime Ripard and our CTO Thomas Petazzoni.
    • Our main office is located in Orange, in the heart of the Provence region, close to Avignon, a smaller but dynamic city too. It enjoys a sunny climate and the proximity of the Alps and the Mediterranean sea. Accommodation is very affordable and there are no traffic issues! You would be working with our founder Michael Opdenacker and, of course, remotely with the rest of the engineering team. In particular, we are interested in foreign engineers who could help us develop our services in their home countries.

    We prefer not to offer home-based positions for the moment; they have their own complexity and cost, and we have plenty of space left in our current offices.

    See a full description and details about how to contact us.

    by Michael Opdenacker at June 09, 2015 07:50 PM

    June 03, 2015

    ZeptoBARS

    RGB flicker LED : weekend die-shot


    Unlike the previous LED, this one is completely deterministic: the individual diodes differ slightly only in RC oscillator frequency (~±10%). The regular structure at the lower-left side suggests that it's some sort of microcode-driven design.

    Die size 553x474 µm, 1.5µm technology.

    Thanks for this interesting chip to ASIP department of Gomel State University.

    After metalization etch:

    June 03, 2015 12:56 AM

    Flicker LED : weekend die-shot

    Some might have seen candle flicker LEDs - their brightness is modulated randomly to mimic a real candle. This is achieved by using a digital die copackaged with a red LED die in a standard 5mm transparent case.



    This design apparently uses the phase difference between two RC oscillators as a source of random data. There are multiple designs in the wild; some others are apparently based on an LFSR with a single oscillator. More on the topic: siliconpr0n.org, cpldcpu.wordpress.com, hackaday.com.
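
    A rough behavioral sketch of that presumed mechanism (my assumption based on the description above, not anything reverse-engineered from this die): two free-running oscillators with slightly different, drifting periods, where sampling one with the other yields an irregular bit stream.

        `timescale 1ns/1ps
        // Behavioral toy model only - in the real chip the randomness comes from
        // analog phase noise, which is crudely mimicked here with $random jitter.
        module flicker_entropy (output reg rnd_bit = 1'b0);
            reg  osc_a = 1'b0, osc_b = 1'b0;
            real per_a = 100.0;                    // oscillator A period, ns
            real per_b;

            always #(per_a / 2.0) osc_a = ~osc_a;  // stable oscillator

            always begin                           // oscillator B with jitter
                per_b = 109.0 + ($random % 100) / 100.0;
                #(per_b / 2.0) osc_b = ~osc_b;
            end

            always @(posedge osc_a)                // sample B with A
                rnd_bit <= osc_b;
        endmodule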

    Die size 580x476 µm, 3µm technology.

    Thanks for this interesting chip to ASIP department of Gomel State University.

    After metalization etch:

    June 03, 2015 12:19 AM

    June 02, 2015

    Free Electrons

    New training course on Buildroot: materials freely available

    Last year, Free Electrons launched a new training course on using the Yocto Project and OpenEmbedded to develop embedded Linux systems. Among the build system tools available in the embedded Linux ecosystem, another very popular choice is Buildroot, and we are happy to announce today that we are releasing a new 3-day training course on Buildroot!

    Free Electrons is a major contributor to the upstream Buildroot project, with more than 2800 patches merged as of May 2015. Our engineer Thomas Petazzoni alone has contributed more than 2700 patches. He has gathered extensive knowledge of Buildroot and its internals, being one of the primary authors of Buildroot’s core infrastructure. He is a major participant in the Buildroot community, organizing the regular Buildroot Developer Days and supporting users through the mailing list and on IRC. Last but not least, Thomas acts as an interim maintainer when the main Buildroot maintainer is not available, an indication of his strong involvement in the Buildroot project.

    In addition, Free Electrons has used and is using Buildroot in a significant number of customer projects, giving us an excellent view of Buildroot usage for real projects. This feedback has been driving some of our Buildroot contributions over the last years.

    The 3-day training course we have developed covers all aspects of Buildroot: basic usage and configuration, understanding the source and build trees, creating new packages (including advanced aspects), analyzing the build, tips for organizing your Buildroot work, using Buildroot for application development, and more. See the detailed agenda.

    We can deliver this training course anywhere in the world, at your location (see our rates and related details). We have also scheduled a first public session in English in Toulouse, France, from November 30 to December 2. Contact us at training@free-electrons.com if you are interested.

    Finally, last but not least, as we do for all our training sessions, we are making the training materials freely available under a Creative Commons BY-SA license at the time of the training announcement: the first session of this course is being given this week. For the Buildroot training, the available materials are:

    Our materials have already been reviewed by some of the most prominent contributors to Buildroot: Peter Korsgaard (Buildroot maintainer), Yann E. Morin, Thomas De Schampheleire, Gustavo Zacarias and Arnout Vandecappelle. We would like to take this opportunity to thank them for their useful comments and suggestions in the development of this new training course.

    by Thomas Petazzoni at June 02, 2015 08:51 PM

    May 30, 2015

    Bunnie Studios

    Name that Ware, May 2015

    The Ware for May 2015 is below.

    Thanks to xobs for contributing this ware!

    by bunnie at May 30, 2015 02:18 PM


