copyleft hardware planet

April 18, 2014

Video Circuits

Jonathan Gillie

Here is some interesting video collage work from Jonathan Gillie, using Tachyons+ gear to generate the video effects, which were then arranged in After Effects.



http://jonathangillieportfolio.tumblr.com/
http://jongillie.tumblr.com/

by Chris (noreply@blogger.com) at April 18, 2014 06:59 AM

April 15, 2014

Peter Zotov, whitequark

A guide to extension points in OCaml

Extension points (also known as “-ppx syntax extensions”) are the new API for syntactic extensions in OCaml. The old API, known as camlp4, is very flexible, but also huge, practically undocumented, lagging behind the newly introduced syntax in the compiler, and overall confusing to those attempting to use it.

Extension points are an excellent and very simple replacement introduced by Alain Frisch. In this article, I will explain how to amend OCaml’s syntax using the extension points API.

Note that the features I describe in this article are so bleeding edge, it’ll need constant transfusions just to stay alive. The last transfusion, er, update, happened on 2014-04-17.

Update 2014-04-17: Camlp4 now works. Describe extension nodes properly. Make example more idiomatic. Add the section on packaging.

In order to use the extension points API, you’ll need a trunk compiler. As camlp4 is no longer shipped with it, you will need to install camlp4 separately. This can all be done with opam:

opam switch reinstall 4.02.0dev+trunk
opam remote add jpdeplaix git://github.com/jpdeplaix/opam-overlay
opam install camlp4 ocamlfind oasis

What is Camlp4?

At its core, camlp4 (P4 stands for Pre-Processor-Pretty-Printer) is a parsing library which provides extensible grammars. That is, it makes it possible to define a parser and then, later, derive another parser by adding a few rules to the original one. The OCaml syntax (two OCaml syntaxes, in fact: the original one and a revised one introduced specifically for camlp4) is just a special case.

When using camlp4 syntax extensions with OCaml, you write your program in a syntax which is not compatible with OCaml’s (neither the original nor the revised one). Then, the OCaml compiler (when invoked with the -pp switch) passes the original source to the preprocessor as text; when the preprocessor has finished its work, it prints back valid OCaml code.

There are a lot of problems with this approach:

  • It is confusing to users. Camlp4 preprocessors can define almost any imaginable syntax, so unless one is also familiar with all the preprocessors used, it is not in general possible to understand the source.

  • It is confusing to tools, for much the same reason. For example, Merlin has no plans to support camlp4 in general, and has implemented workarounds only for a few selected extensions, e.g. pa_ounit.

  • Writing camlp4 extensions is hard. It requires learning a new (revised) syntax and a complex, scarcely documented API (try module M = Camlp4;; in utop—the signature is 16255 lines long. Yes, sixteen thousand.)

  • It is not well-suited for type-driven code generation, which is probably the most common use case for syntax extensions, because it is hard to make different camlp4 extensions cooperate; type_conv was required to enable this functionality.

  • Last but not least, using camlp4 prevents the OCaml compiler from printing useful suggestions in error messages, like File "ifdef.ml", line 17: This '(' might be unmatched. Personally, I find that very annoying.

What is the extension points API?

The extension points API is much simpler:

  • A syntax extension is now a function that maps an OCaml AST to an OCaml AST. Correspondingly, it is no longer possible to extend syntax in arbitrary ways.

  • To make syntax extensions useful for type-driven code generation (like type_conv), the OCaml syntax is enriched with attributes.

    Attributes can be attached to pretty much any interesting syntactic construct: expressions, types, variant constructors, fields, modules, etc. By default, attributes are ignored by the OCaml compiler.

    Attributes can contain a structure, expression or pattern as their payload, allowing a very wide range of behavior.

    For example, one could implement a syntax extension that would accept type declarations of the form type t = A [@id 1] | B [@id 4] of int [@@id_of] and generate a function mapping a value of type t to its integer representation (see the sketch after this list).

  • To make syntax extensions useful for implementing custom syntactic constructs, especially for control flow (like pa_lwt), the OCaml syntax is enriched with extension nodes.

    Extension nodes designate a custom, incompatible variant of an existing syntactic construct. They’re only available for expression constructs: fun, let, if and so on. When the OCaml compiler encounters an extension node, it signals an error.

    Extension nodes have the same payloads as attributes.

    For example, one could implement a syntax extension that would accept a let binding of the form let%lwt (x, y) = f in x + y and translate it to Lwt.bind f (fun (x, y) -> x + y).

  • To make it possible to insert fragments of code written in entirely unrelated syntax into OCaml code, the OCaml syntax is enriched with quoted strings.

    Quoted strings are simply strings delimited with {<delim>| and |<delim>}, where <delim> is a (possibly empty) sequence of lowercase letters. They behave just like regular OCaml strings, except that syntactic extensions may extract the delimiter.
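
As a concrete illustration of the attribute-driven example above (a sketch, not something defined by the extension points API itself): given the declaration type t = A [@id 1] | B [@id 4] of int [@@id_of], such a hypothetical id_of extension could generate something like:

(* Hypothetical output of the [@@id_of] extension for the type t above. *)
let id_of = function
  | A -> 1
  | B _ -> 4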

Using the extension points API

On a concrete level, a syntax extension is an executable that receives a marshalled OCaml AST and emits a marshalled OCaml AST. The OCaml compiler now also accepts a -ppx option, specifying one or more extensions to preprocess the code with.

To aid this, the internals of the OCaml compiler are now exported as the standard findlib package compiler-libs. This package, among other things, contains the interface defining the OCaml AST (modules Asttypes and Parsetree) and a set of helpers for writing the syntax extensions (modules Ast_mapper and Ast_helper).

I won’t describe the API in detail; it’s well-documented and nearly trivial (especially when compared with camlp4). Rather, I will describe all the necessary plumbing one needs around an AST-mapping function to turn it into a conveniently packaged extension.

It is possible, but extremely inconvenient, to pattern-match and construct the OCaml AST manually. The extension points API makes it much easier:

  • It provides an Ast_mapper.mapper type and Ast_mapper.default_mapper value:
type mapper = {
  (* ... *)
  expr: mapper -> expression -> expression;
  (* ... *)
  structure: mapper -> structure -> structure;
  structure_item: mapper -> structure_item -> structure_item;
  typ: mapper -> core_type -> core_type;
  type_declaration: mapper -> type_declaration -> type_declaration;
  type_kind: mapper -> type_kind -> type_kind;
  value_binding: mapper -> value_binding -> value_binding;
  (* ... *)
}
val default_mapper : mapper

The default_mapper is a “deep identity” mapper, i.e. it traverses every node of the AST, but changes nothing.

Together, they provide an easy way to use open recursion, i.e. to only handle the parts of AST which are interesting to you.

  • It provides a set of helpers in the Ast_helper module which simplify constructing the AST. (Unlike Camlp4, the extension points API does not provide code quasiquotation, at least for now.)

    For example, Exp.tuple [Exp.constant (Const_int 1); Exp.constant (Const_int 2)] would construct the AST for (1, 2). While unwieldy, this is much better than elaborating the AST directly.

  • Finally, it provides an Ast_mapper.run_main function, which handles the command line arguments and I/O.

Example

Let’s assemble it all together to make a simple extension that replaces [%getenv "<var>"] with the compile-time contents of the variable <var>.

First, let’s take a look at the AST that [%getenv "<var>"] would parse to. To do this, invoke the OCaml compiler as ocamlc -dparsetree foo.ml:

let _ = [%getenv "USER"]
[
  structure_item (test.ml[1,0+0]..[1,0+24])
    Pstr_eval
    expression (test.ml[1,0+8]..[1,0+24])
      Pexp_extension "getenv"
      [
        structure_item (test.ml[1,0+17]..[1,0+23])
          Pstr_eval
          expression (test.ml[1,0+17]..[1,0+23])
            Pexp_constant Const_string("USER",None)
      ]
]

As you can see, the grammar category we need is “expression”, so we need to override the expr field of the default_mapper:

ppx_getenv.ml
open Ast_mapper
open Ast_helper
open Asttypes
open Parsetree
open Longident
open Location

let getenv s = try Sys.getenv s with Not_found -> ""

let getenv_mapper argv =
  (* Our getenv_mapper only overrides the handling of expressions in the default mapper. *)
  { default_mapper with
    expr = fun mapper expr ->
      match expr with
      (* Is this an extension node? *)
      | { pexp_desc =
          Pexp_extension (
          (* Should have name "getenv". *)
          { txt = "getenv" },
          (* Should have a single structure item, which is evaluation of a constant string. *)
          PStr [{ pstr_desc =
                  Pstr_eval ({ pexp_loc  = loc;
                               pexp_desc =
                               Pexp_constant (Const_string (sym, None))}, _)}] )} ->
        (* Replace with a constant string with the value from the environment. *)
        Exp.constant ~loc (Const_string (getenv sym, None))
      (* Delegate to the default mapper. *)
      | x -> default_mapper.expr mapper x;
  }

let () = run_main getenv_mapper

This syntax extension can be easily compiled e.g. with ocamlbuild -package compiler-libs.common ppx_getenv.native.

You can verify that this produces the desired result by asking OCaml to pretty-print the transformed source: ocamlc -dsource -ppx ./ppx_getenv.native foo.ml:

let _ = "whitequark"

Packaging

When your extension is ready, it’s convenient to build and test it with OASIS, and distribute via opam. This is not hard, but has a few gotchas.

The OASIS configuration I suggest is simple:

_oasis
# (header...)
OCamlVersion: >= 4.02

Executable ppx_getenv
  Path:           lib
  BuildDepends:   compiler-libs.common
  MainIs:         ppx_getenv.ml
  CompiledObject: byte

Executable test_ppx_getenv
  Build$:         flag(tests)
  Install:        false
  Path:           lib_test
  MainIs:         test_ppx_getenv.ml
  BuildTools:     ppx_getenv
  BuildDepends:   oUnit (>= 2)
  CompiledObject: byte
  ByteOpt:        -ppx lib/ppx_getenv.byte

Test test_ppx_getenv
  Command:        $test_ppx_getenv

You may have noticed that I used CompiledObject: byte for our extension instead of CompiledObject: best. This is to work around a drawback in OASIS, which makes it impossible to substitute the actual name of the extension executable in the ByteOpt: field below, or to specify different field values depending on the value of $is_native.

The basic opam package can be generated with oasis2opam; however, it currently produces incorrect syntax for the ocaml-version field. Replace it with ocaml-version: [ >= "4.02" ].

After installing, the extension executable will be placed into ~/.opam/<version>/bin.

It is currently not possible to engage the syntax extension via findlib at all. To use it in applications, the following myocamlbuild.ml rule will work:

myocamlbuild.ml
dispatch begin
  function
  | After_rules ->
    flag ["ocaml"; "compile"; "use_ppx_getenv"] (S[A"-ppx"; A"ppx_getenv"]);
  | _ -> ()
end

Conclusion

The extension points API is really nice, but it’s not yet as usable as it could be. Nevertheless, it’s possible to create and use extension packages without too many ugly workarounds.

References

If you are writing an extension, you’ll find this material useful:

Other than the OCaml sources, I’ve found Alain Frisch’s two articles (1, 2) on the topic extremely helpful. I only mention them this late because they’re quite outdated.

April 15, 2014 11:53 PM

Bunnie Studios

Myriad RF for Novena

This is so cool. Myriad-RF has created a port of their wideband software defined radio to Novena (read more at their blog). Currently, it’s just CAD files, but if there’s enough interest in SDR on Novena, they may do a production run.

The board above is based on the Myriad-RF 1. It is a fully configurable RF board that covers all commonly used communication frequencies, including LTE, CDMA, TD-CDMA, W-CDMA, WiMAX, 2G and many more. Their Novena variant plugs right into our existing high speed expansion slot — through pure coincidence both projects chose the same physical connector format, so they had to move a few traces and add a few components to make their reference design fully inter-operable with our Novena design. Their design (and the docs for the transceiver IC) is also fully open source, and in fact they’ve one-upped us because they use an open tool (KiCad) to design their boards.

I can’t tell you how excited I am to see this. One of our major goals in doing a crowdfunding campaign around Novena is to raise community awareness of the platform and to grow the i.MX6 ecosystem. We can’t do everything we want to do with the platform by ourselves, and we need the help of other talented developers, like those at Myriad-RF, to unlock the full potential of Novena.

by bunnie at April 15, 2014 07:03 PM

April 14, 2014

Sebastien Bourdeauducq, lekernel.net

EHSM-2014 CFP

CALL FOR PARTICIPATION

Exceptionally Hard & Soft Meeting
pushing the frontiers of open source and DIY
DESY, Hamburg site, June 27-29 2014
http://ehsm.eu
@ehsmeeting

Collaboration between open source and research communities empowers open hardware to explore new grounds and hopefully deliver on the “third industrial revolution”. The first edition of the Exceptionally Hard and Soft Meeting featured lectures delivered by international makers, hackers, scientists and engineers on topics such as nuclear fusion, chip design, vacuum equipment machining, and applied quantum physics. Tutorials gave a welcoming hands-on introduction to people of all levels, including kids.

EHSM is back in summer 2014 for another edition of the most cutting-edge open source conference. This year we are proud to welcome you to an exceptional venue: DESY, Europe’s second-largest particle physics laboratory!

Previous EHSM lectures may be viewed at: http://ehsm.eu/2012/media.html

ATTEND WITHOUT PRESENTING:
Attendance is open to all curious minds.

EHSM is entirely supported by its attendees and sponsors. To help us make this event happen, please donate and/or order your ticket as soon as possible by visiting our website http://ehsm.eu.
Prices are:

  • 45E – student/low-income online registration
  • 95E – online registration
  • 110E – door ticket
  • 272E – supporter ticket, with our thanks and your name on the website.
  • 1337E – gold supporter ticket, with our thanks and your company/project logo on the website and the printed programme.

EHSM is a non-profit event where the majority of the budget covers speakers’ travel and transportation of exhibition equipment.

SPEAKERS: SUBMIT YOUR PRESENTATION
Is there a device in your basement that demonstrates violations of Bell’s inequalities? We want to see it in action. Are you starting up a company to build nuclear fusion reactors? Tell us about it. Does your open source hardware or software run some complex, advanced and beautiful scientific instruments? We are eager to learn about it. Do you have stories to tell about your former job manufacturing ultra high vacuum equipment in the Soviet Union? We want to hear about your experiences. Do you have a great design for a difficult open source product that can be useful to millions? Team up with the people who can help implement your ideas.

Whoever you are, wherever you come from, you are welcome to present technologically awesome work at EHSM. Travel assistance and visa invitation letters provided upon request. All lectures are in English.

This year, we will try to improve the conference’s documentation by publishing proceedings. When relevant, please send us a paper on your presentation topic. We are OK with previously published work; we simply expect high-quality and up-to-date content.

To submit your presentation, send a mail to team@ehsm.eu with typically the following information:

  • Your name(s). You can be anonymous if you prefer.
  • Short bio
  • Title of the presentation
  • Abstract
  • How much time you would like
  • Full paper (if applicable)
  • Links to more information (if available)
  • Contact information (e-mail + mobile phone if possible)
  • If you need us to arrange your trip:
  • Where you would be traveling from
  • If you need accommodation in Hamburg

We will again have an exhibition area where you can show and demonstrate your work – write to the same email address to apply for space. If you are bringing bulky or high-power equipment, make sure to let us know:

  • What surface you would use
  • What assistance you would need for equipment transport between your lab and the conference
  • If you need 3-phase electric power (note that Germany uses 230V/400V 50Hz)
  • What the peak power of your installation would be

Tutorials on any technology topic are also welcome, and may cater to all levels, including beginners and kids.

We are counting on you to make this event awesome. Feel free to nominate other speakers that you would like to see at the conference, too – just write us a quick note and we will contact them.

KEY INFORMATION:
Conference starts: morning of June 27th, 2014
Conference ends: evening of June 29th, 2014
Early registration fee ends: February 1st, 2014
Please submit lectures, tutorials and exhibits before: May 15th, 2014

Conference location:
DESY
Notkestrasse 85
22607 Hamburg, Germany

WE ARE LOOKING FORWARD TO WELCOMING YOU IN HAMBURG!
- EHSM e.V. <http://ehsm.eu>

by lekernel at April 14, 2014 07:42 PM

April 13, 2014

Peter Zotov, whitequark

XCompose support in Sublime Text

Sublime Text is an awesome editor, and XCompose is very convenient for quickly typing weird Unicode characters. However, these two don’t combine: Sublime Text has an annoying bug which prevents the xim input method, which handles XCompose files, from working.

What to do? If Sublime Text was open-source, I’d make a patch. But it is not. However, I still made a patch.

If you just want XCompose to work, then add the sublime-imethod-fix PPA to your APT sources, install the libsublime-text-3-xim-xcompose package, and restart Sublime Text. (That’s it!) Or, build from source if you’re not on Ubuntu.

However, if you’re interested in all the gory (and extremely boring) details, with an occasional animated gif, read on.

Hunting the bug

To describe the bug, I will first need to explain its natural environment. In Linux, a desktop graphics stack consists of an X11 server and an application using the Xlib library for drawing the windows and handling user input. When it was conceived, a top-notch UI looked like this:

The X11 protocol and Xlib library are quite high-level: originally, you were expected to send compact, high-level instructions over the wire (such as “fill a rectangle at (x,y,x’,y’)”) in order to support thin clients over slow networks. However, thin clients and mainframes vanished, and in their place came a craving for beautiful user interfaces; and the X11 protocol, primitive as it is, draws everything as if it came from 1993. (It is also worth noting that X went from X1 to X11 in three years, and has not changed since then.)

The Compose key and XCompose files are a remnant of that era. Xlib has a notion of input method; that is, you would feed raw keypresses (i.e. the coordinates of keys on the keyboard) to Xlib and it would return you whole characters. This ranged from the extremely simple US input method (mapping keys to characters 1:1), through more complex input methods for European languages (using a dedicated key to produce composite characters like é and ç), to very intricate Chinese and Japanese input methods with complex mappings between Latin input and ideographic output.

Modern GUI toolkits like GTK and Qt ignore the X11 protocol almost entirely. The only drawing operation in use is “transfer this image and slap it over a rectangular area” (which isn’t even present in the original X11 protocol). Similarly, they pretty much ignore the X input method, favoring more modern scim and uim.

XCompose is probably the only useful part of the whole X11 stack. Unfortunately, native XCompose support is not present anywhere except the original X input method. Fortunately, both GTK and Qt allow changing their input method to XIM. Unfortunately, Sublime Text somehow ignored the X input method completely even when instructed to use it.

Sublime Text draws its own UI entirely to make it look nice on all the platforms. As such, on Linux it has three layers of indirection: first its own GUI toolkit, then GTK, which it uses to avoid dealing with the horror of X11, then X11 itself.

The Xlib interface for communicating with the input method is pretty simple: it’s just the XmbLookupString function. You would feed it the XKeyPressedEvents containing key codes that you receive from the X11 server, and it would give back a string, possibly empty, with the sequence of characters you need to insert in your text area. Also, in order to start communicating, you need to initialize an X input context corresponding to a particular X window. (An X window is what you’d call a window, but also what you’d call a widget—say, a button has its own X11 window.)
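
For illustration, here is a minimal sketch (my own, not code from this post) of how a client would feed a key press to the input method; ic is an X input context previously created with XCreateIC for the window in question:

#include <X11/Xlib.h>

void handle_key(XIC ic, XKeyPressedEvent *event)
{
    char buf[64];
    KeySym keysym;
    Status status;

    /* Feed the raw key press to the X input method. */
    int len = XmbLookupString(ic, event, buf, sizeof(buf), &keysym, &status);
    if (len > 0 && (status == XLookupChars || status == XLookupBoth)) {
        /* buf[0..len-1] holds the composed characters to insert into the text area. */
    }
}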

GTK packs the input method communication logic in the gtk_im_context_xim_filter_keypress function it has in its wrapper around the X input method. From there, it’s a pretty deep hole:

  • gtk_im_context_xim_filter_keypress uses a helper gtk_im_context_xim_get_ic to get the X input context, and if no context is returned, it resorts to a trivial US keymap;
  • gtk_im_context_xim_get_ic pulls the X input method handle and associated GTK settings from the ((GtkIMContextXIM *)context_xim)->im_info field;
  • which is initialized by the set_ic_client_window helper;
  • which refuses to initialize it if ((GtkIMContextXIM *)context_xim)->client_window is NULL;
  • which is called (through one more layer of indirection used by GTK to change the input methods on the fly) by Sublime Text itself;
  • which passes NULL as the client_window.

Now, why does that happen? Sublime Text calls gtk_im_context_set_client_window (the helper that eventually delegates to set_ic_client_window) in a snippet of code which looks roughly like this:

void sublimetext::gtk2::initialize() {
  // snip
  GtkWindow *window = gtk_window_new ();
  // a bit more initialization
  GtkIMContext *context = gtk_im_multicontext_new ();
  gtk_im_context_set_client_window(GTK_IM_CONTEXT(context), window->bin.container.widget.window);
  // snip
}

What is that window->bin.container.widget.window? It contains the GdkWindow of the GtkWindow; Sublime Text has to fetch it to pass to gtk_im_context_set_client_window, which wants a GdkWindow.

What is a GdkWindow? It’s a structure used by GTK to wrap X11 windows on Linux and other native structures on the rest of the platforms. As such, if the GdkWindow and its underlying X11 window have not yet been created, say, because the window has never been shown, the field will contain NULL. And since Sublime Text attempts to bind the IM context to the window immediately after creating the latter, this is exactly the bug we observe.

It is worth noting that while no input method that requires the client window to be known works, a simple GTK fallback does: it queries the system for the key configured as the Compose key, but uses internally defined tables of commonly used sequences. This is why launching Sublime Text as GTK_IM_METHOD=whatever-really subl allows you to enter ° with <Multi_key> <o> <o>, but not to customize the sequences by changing any of the XCompose files.

Cooking the meat

How do we fix this? I started with a simple gdb script:

# Run as: $ GTK_IM_MODULE=xim gdb -script fix-xcompose-sublime-text-3061.gdb
file /opt/sublime_text/sublime_text
set follow-fork-mode child
set detach-on-fork off
run
inferior 2
set follow-fork-mode parent
set detach-on-fork on

b *0x5b3267
c
del 1
set $multicontext = (GtkIMMulticontext*) $r13
set $window = (GtkWindow*) $rbx

b gtk_widget_show if widget==$window
c
fin
del 2

call gtk_im_context_set_client_window($multicontext,$window->bin.container.widget.window)
detach inferiors 1 2
quit

On a high level, the script does four things:

  1. Sublime Text forks at startup, so the script has to do a little funny dance to attach gdb to the correct process.
  2. Then, it stops at the point in the initialization sequence where my Sublime Text build calls gtk_im_context_set_client_window, and captures the window and multicontext variables, which the compiler happened to leave around in spare registers.
  3. Then, it waits until GTK surely initializes a GdkWindow for the window GtkWindow.
  4. Then, it calls gtk_im_context_set_client_window again, exactly as Sublime Text would, but at the right time.

The script works. However, it is slow at startup and not very convenient in general. In particular, I would have to rewrite it every time Sublime Text updates. So, I opted for a better approach.

LD_PRELOAD (see also tutorials: 1, 2) is a convenient feature of the Linux dynamic linker which allows you to substitute some functions contained in a shared library with different functions contained in another shared library. This is how, for example, fakeroot performs its magic.
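
As a generic illustration of the mechanism (a sketch of my own, not the shim described below), an interposing library defines a function with the same name as the one it wants to hijack, does its own work, and then forwards to the real implementation looked up with dlsym(RTLD_NEXT, ...):

/* shim.c: build with  gcc -shared -fPIC -o shim.so shim.c -ldl
 * use with            LD_PRELOAD=./shim.so ./some_program
 * Every call to puts() is logged and then forwarded to the real puts. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int puts(const char *s)
{
    static int (*real_puts)(const char *) = NULL;
    if (!real_puts)
        real_puts = (int (*)(const char *))dlsym(RTLD_NEXT, "puts");

    fprintf(stderr, "[shim] puts(\"%s\")\n", s);
    return real_puts(s);
}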

Initially I wanted to intercept gtk_window_new and gtk_im_multicontext_new to get the GtkIMMulticontext and the GtkWindow Sublime Text creates—they’re the first ever created—and then gtk_im_context_filter_keypress to call gtk_im_context_set_client_window before the first keypress is handled. But somehow these calls were not intercepted by LD_PRELOAD; perhaps because of a weird way Sublime Text calls dlsym? I never figured it out.

So, eventually I settled on intercepting the initialization of the GTK XIM input method plugin (which is loaded by GTK itself and therefore can be intercepted easily) and replacing its filter_keypress handler with my own. A filter_keypress handler receives a GtkIMContext and a GdkEvent, which contains the pointer to GdkWindow, so that would give me all the information I need.

That worked.

Celebrating the game

Indeed, the goal was achieved in full. It only took me about ten hours, with practically no prior knowledge of libx11 or libgtk internals, access to Sublime Text source, or experience in reverse engineering.

But what was this for? I don’t think I ever needed to type ಠ_ಠ in Sublime Text.

I think I just like the sense of control over my tools.

April 13, 2014 10:06 PM

Free Electrons

Android training sessions in the UK

Free Electrons is happy to announce its first public training session outside of France.

British Android robot logo

Of course, we deliver training courses on customer sites all around the world, but this will be the first one open to individual registration that we organize outside of France.

We are starting with an Android system development session in Southampton, UK.

You will enjoy the newest version of our Android course, based on Android 4.x, and using the BeagleBone Black as the development platform for the practical labs. As always in our training sessions, participants walk away with the board used during the practical labs (in this case the BeagleBone Black and its LCD cape), allowing them to continue their learning and experiments well after the end of the course.

Being a popular cruising destination, Southampton is easy to reach from other cities in the UK and in the world.

The Android robot picture is copyrighted by Google. It is licensed under the Creative Commons 3.0 Attribution Unported license. The British robot version has been derived by Free Electrons, and is available under the same license. Feel free to reuse it and improve it, as long as you credit the original author!

by Michael Opdenacker at April 13, 2014 06:15 AM

Andrew Zonenberg, Silicon Exposed

Getting my feet wet with invasive attacks, part 2: The attack

This is part 2 of a 2-part series. Part 1, Target Recon, is here.

Once I knew what all of the wires in the ZIA did, the next step was to plan an attack to read signals out.

I decapped an XC2C32A with concentrated sulfuric acid and soldered it to my dev board to verify that it was alive and kicking.

Simple CR-II dev board with integrated FTDI USB-JTAG
After testing I desoldered the sample and brought it up to campus to introduce it to some 30 keV Ga+ ions.

I figured that all of the exposed packaging would charge, so I'd need to coat the sample with something. I normally used sputtered Pt but this is almost impossible to remove after deposition so I decided to try evaporated carbon, which can be removed nicely with oxygen plasma among other things.

I suited up for the cleanroom and met David Frey, their resident SEM/FIB expert, in front of the Zeiss 1540 FIB system. He's a former Zeiss engineer who's very protective of his "baby" and since I had never used a FIB before there was no way he was going to let me touch his, so he did all of the work while I watched. (I don't really blame him... FIB chambers are pretty cramped and it's easy to cause expensive damage by smashing into something or other. Several SEMs I've used have had one detector or another go offline for repair after a more careless user broke something.)

The first step was to mill a hole through the 900 nm or so of silicon nitride overglass using the ion beam.

Newly added via, not yet filled
Once the via was drilled and it appeared we had made contact with the signal trace, it was time to backfill with platinum. The video below is sped up 10x to avoid boring my readers ;)


Metal deposition in a FIB is basically CVD: a precursor gas is injected into the chamber near the sample and it decomposes under the influence of beam-generated secondary electrons.

Once the via was filled we put down a large (20 μm square) square pad we could hit with an electrical probe needle.

Probe pad
Once everything was done and the chamber was vented I removed the carbon coating with oxygen plasma (the cleanroom's standard photoresist removal process), packaged up my sample, went home, and soldered it back to the board for testing. After powering it up... nothing! The device was as dead as a doornail, I couldn't even get a JTAG IDCODE from it.

I repeated the experiment a week or two later, this time soldering bare stub wires to the pins so I could test by plugging the chip into a breadboard directly. This failed as well, but watching my benchtop power supply gave me a critical piece of information: while VCCINT was consuming the expected power (essentially zero), VCCIO was leaking by upwards of 20 mA.

This ruled out beam-induced damage as I had not been hitting any of the I/O circuitry with the ion beam. Assuming that the carbon evaporation process was safe (it's used all the time on fragile samples, so this seemed a reasonably safe assumption for the time being), this left only the plasma clean as the potential failure point.

I realized what was going on almost instantly: the antenna effect. The bond wire and leadframe connected to each pad in the device was acting as an antenna and coupling some of the 13.56 MHz RF energy from the plasma into the input buffers, blowing out the ESD diodes and input transistors, and leaving me with a dead chip.

This left me with two possible ways to proceed: removing the coating by chemical means (a strong oxidizer could work), or not coating at all. I decided to try the latter, since there were fewer steps to go wrong.

Somewhat surprisingly, the cleanroom staff had very limited experience working with circuit edits - almost all of their FIB work was process metrology and failure analysis rather than rework, so they usually coated the samples.

I decided to get trained on RPI's other FIB, the brand-new FEI Versa 3D. It's operated by the materials science staff, who are a bit less of the "helicopter parent" type and were actually willing to give me hands-on training.

FEI Versa 3D SEM/FIB
The Versa can do almost everything the older 1540 can do, in some cases better. Its one limitation is that it only has a single-channel gas injection system (platinum) while the 1540 is plumbed for platinum, tungsten, SiO2, and two gas-assisted etches.

After a training session I was ready to go in for an actual circuit edit.

FIB control panel
The Versa is the most modern piece of equipment I've used to date: it doesn't even have the classical joystick for moving the stage around. Almost everything is controlled by the mouse, although a USB-based knob panel for adjusting magnification, focus, and stigmators is still provided for those who prefer to turn something with their fingers.

Its other nice feature is the quad-image view which lets you simultaneously view an ion beam image, an e-beam image, the IR camera inside the chamber (very helpful for not crashing your sample into a $10,000 objective lens!), and a navigation camera which displays a top-down optical view of your sample.

The nav-cam has saved me a ton of time. On RPI's older JSM-6335 FESEM, the minimum magnification is fairly high so I find myself spending several minutes moving my sample around the chamber half-blind trying to get it under the beam. With the Versa's nav-cam I'm able to set up things right the first time.

I brought up both of the beams on the aluminum sample mounting stub, then blanked them to try a new idea: Move around the sample blind, using the nav-cam only, then take single images in freeze-frame mode with one beam or the other. By reducing the total energy delivered to the sample I hoped to minimize charging.

This strategy was a complete success, I had some (not too severe) charging from the e-beam but almost no visible charging in the I-beam.

The first sample I ran on the Versa was electrically functional afterwards, but the probe pad I deposited was too thin to make reliable contact with. (It was also an XC2C64A since I had run out of 32s). Although not a complete success, it did show that I had a working process for circuit edits.

After another batch of XC2C32As arrived, I went up to campus for another run. The signal of interest was FB2_5_FF: the flipflop for function block 2 macrocell 5. I chose this particular signal because it was the leftmost line in the second group from the left and thus easy to recognize without having to count lines in a bus.

The drilling went flawlessly, although it was a little tricky to tell whether I had gone all the way to the target wire or not in the SE view. Maybe I should start using the backscatter detector for this?

Via after drilling before backfill
I filled in the via and made sure to put down a big pile of Pt on the probe pad so as to not repeat my last mistake.

The final probe pad, SEM image
Seen optically, the new pad was a shiny white with surface topography and a few package fragments visible through it.

Probe pad at low mag, optical image
At higher magnification a few slightly damaged CMP filler dots can be seen above the pad. I like to use filler metal for focusing and stigmating the ion beam at milling currents before I move to the region of interest because it's made of the same material as my target, it's something I can safely destroy, and it's everywhere - it's hard to travel a significant distance on a modern IC without bumping into at least a few pieces of filler metal.

Probe pad at higher magnification, optical image. Note damaged CMP filler above pad.
I soldered the CPLD back onto the board and was relieved to find out that it still worked! The next step was to write some dummy code to test it out:

`timescale 1ns / 1ps
module test(clk_2048khz, led);

    //Clock input
    (* LOC = "P1" *) (* IOSTANDARD = "LVCMOS33" *)
    input wire clk_2048khz;

    //LED out
    (* LOC = "P38" *) (* IOSTANDARD = "LVCMOS33" *)
    output reg led = 0;

    //Don't care where this is placed
    reg[17:0] count = 0;
    always @(posedge clk_2048khz)
        count <= count + 1;

    //Probe-able signal on FB2_5 FF at 2x the LED blink rate
    (* LOC = "FB2_5" *) reg toggle_pending = 0;
    always @(posedge clk_2048khz) begin
        if(count == 0)
            toggle_pending <= !toggle_pending;
    end

    //Blink the LED
    always @(posedge clk_2048khz) begin
        if(toggle_pending && (count == 0))
            led <= !led;
    end

endmodule


This is a 20-bit counter chain that blinks an LED at ~2 Hz from a 2048 kHz clock on the board. The second-to-last stage of the counter (so ~4 Hz) is constrained to FB2_5, the signal we're probing.

After making sure things still worked I attached the board's plastic standoffs to a 4" scrap silicon wafer with Gorilla Glue to give me a nice solid surface I could put on the prober's vacuum chuck.

Test board on 4" wafer
Earlier today I went back to the cleanroom. After dealing with a few annoyances (for example, the prober with a wide range of Z axis travel, necessary for this test, was plugged into the electrical test station with curve tracing capability but no oscilloscope card) I landed a probe on the bond pad for VCCIO and one on ground to sanity check things. 3.3V... looks good.

Moving carefully, I lifted the probe up from the 3.3V bond pad and landed it on my newly added probe pad.

Landing a probe on my pad. Note speck of dirt and bent tip left by previous user. Maybe he poked himself mounting the probe?
It took a little bit of tinkering with the test unit to figure out where all of the trigger settings were, but I finally saw a ~1.8V, 4 Hz squarewave. Success!

Waveform sniffed from my probe pad
There's still a bit of tweaking needed before I can demo it to my students (among other things, the oscilloscope subsystem on the tester insists on trying to use the 100V input range, so I only have a few bits of ADC precision left to read my 1.8V waveform) but overall the attack was a success.

by Andrew Zonenberg (noreply@blogger.com) at April 13, 2014 12:54 AM

April 12, 2014

ZeptoBARS

Philips PCF8574 - 8-bit I2C port expander : weekend die-shot

The Philips PCF8574 is an 8-bit I2C port expander, manufactured on 3µm technology.

April 12, 2014 06:50 PM

April 08, 2014

ZeptoBARS

Fake audiophile opamps: OPA627 (AD744?!)

Browsing eBay, I noticed insanely cheap OPA627s. These are rather old, popular and high-quality opamps, often used in audiophile gear. The manufacturer (Texas Instruments / Burr-Brown) sells them for $16-80 each (depending on package & options), while on eBay they cost $2.70, shipping included.

Obviously, something fishy was going on. I ordered one, and for comparison an older one in a metal can package, apparently desoldered from some equipment. Let's see if there is any difference.



The plastic one was dissolved in acid; the metal can was easily cut open:


Comparison

Genuine TI/BB OPA627 chip first. We can see at least 4 laser-trimmed resistors here - now we see why it could cost that much. Laser-trimmed resistors are needed due to unavoidable manufacturing variation - the parts inside opamps need to be balanced perfectly.


The "Chinese" $2.70 chip. There is only 1 laser-trimmed resistor, but we also notice the markings AD (Analog Devices?) and B744. Is it really an AD744? If we check the AD744 datasheet, we'll see that the die photo perfectly matches the one in the datasheet.


What happened here?

Some manufacturer in China put in an effort to find a cheaper substitute for the OPA627 - and it appears to be the AD744. The AD744 has similar speed (500ns to 0.01%), is a similar type (*FET input), and supports external offset compensation. The AD744 also supports external frequency compensation (for high-speed, high-gain applications), but there is no corresponding pin on the OPA627 - so this feature is unused.

On the other hand, the AD744 has higher noise (3x) and higher offset voltage (0.5mV vs 0.1mV).

So they bought AD744s in the form of dies or wafers, packaged them and marked them as OPA627. It does not seem that they earned a lot of money here - it's more of an economic sabotage. Good thing they did not use something like the LM358 - in that case it would have been much easier to notice the difference without looking inside...

Be careful when choosing suppliers - otherwise your design might get "cost-optimized" for you :-)

PS. Take a look at our previous story about fake FT232RL.

April 08, 2014 06:38 AM

April 07, 2014

Free Electrons

Free Electrons welcomes Boris Brezillon and Antoine Ténart

Boris Brezillon
Antoine Ténart

We are happy to announce that our engineering team has recently welcomed two new embedded Linux engineers: Boris Brezillon and Antoine Ténart. Boris and Antoine will both be working from the Toulouse office of the company, together with Maxime Ripard and Thomas Petazzoni. They will be helping Free Electrons to address the increasing demand for its development and training services.

Antoine started his professional experience with Embedded Linux and Android in 2011. Before joining Free Electrons in 2014, he started with low level Android system development at Archos (France), and worked on Embedded Linux and Android projects at Adeneo Embedded (France). He joined Free Electrons early March, and has already been involved in kernel contributions on the Marvell Berlin processors and the Atmel AT91 processors, and is also working on our upcoming Yocto training course.

Boris joined Free Electrons on April 1st, and brings significant embedded Linux experience that he gained while working on home automation devices at Overkiz (France), where he maintained a custom distribution built with the Yocto Project. Boris has also already contributed many patches to the mainline Linux kernel sources, in particular for the Atmel AT91 ARM SoCs. He is developing the NAND controller driver for the Allwinner ARM processors and has proposed improvements to the core Linux MTD subsystem (see this thread and this other thread).

by Thomas Petazzoni at April 07, 2014 08:42 PM

Linux 3.14 released, Free Electrons contributions inside!

Linus Torvalds has just released the 3.14 version of the Linux kernel. As usual, it incorporates a large number of changes, for which a good summary is available on the KernelNewbies site.

This time around, Free Electrons is the 21st company contributing to this kernel release, by number of patches, right between Cisco and Renesas. Six of our engineers have contributed to this release: Maxime Ripard, Alexandre Belloni, Ezequiel Garcia, Grégory Clement, Michael Opdenacker and Thomas Petazzoni. In total, they have contributed 121 patches to this kernel release.

  • By far, the largest number of patches are related to the addition of NAND support for the Armada 370 and Armada XP processors. This required a significant effort, done by Ezequiel Garcia, to re-use the existing pxa3xx_nand driver and extend it to cover the specificities of the Armada 370/XP NAND controller. And these specificities were quite complicated, involving a large number of changes to the driver, which all had to also be validated on existing PXA3xx hardware to not introduce any regression.
  • Support for high speed timers on various Allwinner SOCs has been added by Maxime Ripard.
  • Support for the Allwinner reset controller has been implemented by Maxime Ripard.
  • SMP support for the Allwinner A31 SOC was added by Maxime Ripard.
  • A number of small fixes and improvements were made to the AT91 pinctrl driver and the pinctrl subsystem by Alexandre Belloni.
  • Michael Opdenacker continued his quest to finally get rid of the IRQF_DISABLED flag.
  • A number of fixes and improvements were made by Grégory Clement and Thomas Petazzoni on various Armada 370/XP drivers: fix for the I2C controller on certain early Armada XP revisions, fixes to make the Armada 370/XP network driver usable as a module, etc.

In detail, our contributions were:

by Thomas Petazzoni at April 07, 2014 08:25 PM

Video Circuits

Joy to the World by William Laziza (1994)

Recovered by the XFR STN project, Joy to the World is visual music designed for ambient presentation. It combines optical image processing, Amiga graphics and recursive video imagery with synthesized sound. What is unique about this piece is that the audio used to create the visuals is also the soundtrack. This work was created at the Micro Museum.

https://archive.org/details/XFR_2013-08-11_1A_01




by Chris (noreply@blogger.com) at April 07, 2014 11:48 AM

April 06, 2014

Altus Metrum

keithp's rocket blog: Java-Sound-on-Linux

Java Sound on Linux

I’m often in the position of having my favorite Java program (AltosUI) unable to make any sounds. Here’s a history of the various adventures I’ve had.

Java and PulseAudio ALSA support

When we started playing with Java a few years ago, we discovered that if PulseAudio were enabled, Java wouldn’t make any sound. Presumably, that was because the ALSA emulation layer offered by PulseAudio wasn’t capable of supporting Java.

The fix for that was to make sure pulseaudio would never run. That’s harder than it seems; pulseaudio is like the living dead, rising from the grave every time you kill it. As it’s nearly impossible to install any desktop applications without gaining a bogus dependency on pulseaudio, the solution that works best is to make sure dpkg never manages to actually install the program, with dpkg-divert:

# dpkg-divert --rename /usr/bin/pulseaudio

With this in place, Java was a happy camper for a long time.

Java and PulseAudio Native support

More recently, Java has apparently gained some native PulseAudio support in some fashion. Of course, I couldn’t actually get it to work, even after running the PulseAudio daemon. But some kind Debian developer decided that sound should be broken by default for all Java applications and selected the PulseAudio back-end in the Java audio configuration file.

Fixing that involved learning about said Java audio configuration file and then applying a patch to revert the Debian packaging damage.

$ cat /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/sound.properties
...
#javax.sound.sampled.Clip=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider
#javax.sound.sampled.Port=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider
#javax.sound.sampled.SourceDataLine=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider
#javax.sound.sampled.TargetDataLine=org.classpath.icedtea.pulseaudio.PulseAudioMixerProvider

javax.sound.sampled.Clip=com.sun.media.sound.DirectAudioDeviceProvider
javax.sound.sampled.Port=com.sun.media.sound.PortMixerProvider
javax.sound.sampled.SourceDataLine=com.sun.media.sound.DirectAudioDeviceProvider
javax.sound.sampled.TargetDataLine=com.sun.media.sound.DirectAudioDeviceProvider

You can see the PulseAudio mistakes at the top of that listing, with the corrected native interface settings at the bottom.

Java and single-open ALSA drivers

It used to be that ALSA drivers could support multiple applications having the device open at the same time. Those with hardware mixing would use that to merge the streams together; those without hardware mixing might do that in the kernel itself. While the latter is probably not a great plan, it did make ALSA a lot more friendly to users.

My new laptop is not friendly, and returns EBUSY when you try to open the PCM device more than once.

After downloading the jdk and alsa library sources, I figured out that Java was trying to open the PCM device multiple times when using the standard Java sound API in the simplest possible way. I thought I was going to have to fix Java, when I figured out that ALSA provides user-space mixing with the ‘dmix’ plugin. I enabled that on my machine and now all was well.

$ cat /etc/asound.conf
pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer  {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:1,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 44100
    }
    bindings {
        0 0
        1 1
    }
}

ctl.dmixer {
    type hw
    card 1
}

ctl.!default {
    type hw
    card 1
}

As you can see, my sound card is not number 0, it’s number 1, so if your card is a different number, you’ll have to adapt as necessary.

April 06, 2014 05:30 AM

Video Circuits

Photoacoustics

Photoacoustics as a word is now used in association with various methods of studying electromagnetic activity via acoustic detection in medical and scientific contexts. Originally, Alexander Graham Bell and Charles Sumner Tainter discovered the ability to modulate a light source using sound, and inversely to modulate a sound-producing membrane using light, when working on their Photophone optical telecommunications system. This line of thinking, starting around 1880 with the Photophone's invention and continuing right up to the 1920s, eventually made possible inventions like optical sound on film (with all these technologies being indebted to the even earlier discoveries of the photoelectric properties of materials such as selenium). Sound on film interests me a lot, both in its exploitation by artists in the early 20th century and purely for its interesting technological development. I have gathered a lot of information about the creative use of sound on film, but I also became interested in finding evidence of still images that recorded sound (also see the earlier post on the eidophone). So anyway, the first two images are, I believe, of Bell's experiments, from a really cool blog on photography, Homemade Camera.

Second are some images produced by Robert W. Wood using single wave fronts produced by sparks; the latter image is a diagram based on the first, I believe.

Last up is the Phonodeik, an instrument designed by Dayton Miller that allows the photography of complex sound waves over time. It reminds me very much of the earlier Phonautograph, but with a photographic output.

http://en.wikipedia.org/wiki/Photophone
http://en.wikipedia.org/wiki/Sound-on-film
http://en.wikipedia.org/wiki/Optical_sound
http://homemadecamera.blogspot.co.uk/2007/08/photoacoustics.html

http://en.wikipedia.org/wiki/Robert_W._Wood
http://en.wikipedia.org/wiki/Schlieren_photography

http://cultureandcommunication.org/deadmedia/index.php/Phonodeik
http://en.wikipedia.org/wiki/Phonodeik
http://www.phys.cwru.edu/ccpi/Phonodeik.html
http://courtneyjl.wordpress.com/
http://dssmhi1.fas.harvard.edu/

I have a lot more stuff to include on this subject, including cymatics and optical sound experiments, which I left out to cut down the size of this post, but if you have any interesting links I always welcome tips in the comments.

by Chris (noreply@blogger.com) at April 06, 2014 03:49 AM

April 05, 2014

Peter Zotov, whitequark

Page caching with Nginx

For Amplifr, I needed a simple page caching solution which would work with multiple backend servers and require a minimal amount of hassle. It turns out that Nginx alone (1.5.7 or newer) is enough.

First, you need to configure your backend. This consists of emitting a correct Cache-Control header and properly responding to conditional GET requests with If-Modified-Since header.

Amplifr currently emits Cache-Control: public, max-age=1, must-revalidate for cacheable pages. Let’s take a closer look:

  • public means that the page has no elements specific to the particular user, so the cache can send the cache content to several users.
  • max-age=1 means that the content can be cached for one second. As will be explained later, max-age=0 would be more appropriate, but that directive would prevent the page from being cached.
  • must-revalidate means that after the cached content has expired, the cache must not respond with cached content unless it has forwarded the request further and got 304 Not Modified back.

This can be implemented in Rails with a before_filter:

class FooController < ApplicationController
  before_filter :check_cache

  private
  def check_cache
    response.headers['Cache-Control'] = 'public, max-age=1, must-revalidate'
    # `stale?' renders a 304 response, thus halting the filter chain, automatically.
    stale?(last_modified: @current_site.updated_at)
  end
end

Now, we need to make Nginx work like a public cache:

http {
  # ...
  proxy_cache_path /var/cache/nginx/foo levels=1:2 keys_zone=foocache:5m max_size=100m;

  server {
    # ...

    location / {
      proxy_pass              http://foobackend;
      proxy_cache             foocache;
      proxy_cache_key         "$host$request_uri";
      proxy_cache_revalidate  on;
      # Optionally:
      # proxy_cache_use_stale error timeout invalid_header updating
      #                       http_500 http_502 http_503 http_504;
    }
  }
}

The key part is the proxy_cache_revalidate setting. Let’s take a look at the entire flow:

  • User agent A performs GET /foo HTTP/1.1 against Nginx.
  • Nginx has a cache miss and performs GET /foo HTTP/1.0 against the backend.
  • Backend generates the page and returns 200 OK.
  • Nginx detects that Cache-Control permits it to cache the response for 1 second, caches it and returns the response to user agent A.
  • (time passes…)
  • User agent B performs GET /foo HTTP/1.1 against Nginx.
  • Nginx has a cache hit (unless the entry was evicted), but the entry has already expired. Instructed by proxy_cache_revalidate, it issues GET /foo HTTP/1.0 against the backend and includes an If-Modified-Since header.
  • Backend checks the timestamp in If-Modified-Since and detects that Nginx’s cache entry is not actually stale, returning 304 Not Modified. It doesn’t spend any time generating content.
  • Nginx sets the expiration time on cache entry to 1 second from now and returns the cached response to the user agent B.
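
To make this flow easy to observe (an optional addition, not part of the original setup), Nginx can also expose its cache decision in a response header; $upstream_cache_status reads MISS, HIT or REVALIDATED depending on which branch above was taken:

# Inside the same location block, purely for debugging:
add_header X-Cache-Status $upstream_cache_status;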

Some notes on this design:

  1. Technically, performing a conditional GET requires sending an HTTP/1.1 request, but Nginx is only able to talk HTTP/1.0 to the backends. This doesn’t seem to be a problem in practice.
  2. Ideally, specifying max-age=0 in Cache-Control would instruct the cache to store the response and always revalidate it, but Nginx instead doesn’t cache it at all. The HTTP specification permits both behaviors.
  3. You can specify proxy_cache_use_stale directive, so that if the server crashes or becomes unresponsive, Nginx would still serve some cached content. If the frontpage is static, it’s a good way to ensure it will be accessible at all times.

April 05, 2014 09:25 AM

Video Circuits

Magnetic Tape

Magnetic tape is interesting to me. On reels or in cassettes, each recording (or potential recording) is like a little curly drawing that pulls the sound through space. Only one position on the tape is read, and so the linear nature of the tape allows the signal to vary the output it is attached to over time. I messed around with wire recorders a long time ago because I liked the fact that the sound is concentrated into a tiny line-like space, with the heaviness of the mark being replaced by the amplitude of the waveforms encoded as magnetic information. Here are some of my DIY wall-mounted ones; winding the pickup heads was a long day.

I also like Nam June Paik’s 1963 work Random Access a lot: tape is attached to the wall as a drawing, with the playback head made available as a mobile stylus, so you can retrace his steps and listen to the recordings using the same gestures he used to stick them down. A kind of playable graphical notation.

A good friend, Dale, has gone way further with visual tape-based work and kindly sent me some photos of slightly insane pieces he is putting together at the moment. He selects tape based on its visual tonality and creates geometric, slightly illusory patterns, building a second information set encoded in the recording medium. I don't know if Dale does, but I find these relate to visual music and graphic notation practices too. I'll probably try to convince him at his art show here in London on the 10th of April at six; come if you want to hang out, we will probably drink beer after too.

Another interesting artist I found using tape in a slightly different way is Terence Hannum; I like the areas of ground left visible. Pretty black!

There are loads of other examples, I'm sure. I would really like to find a graphic score where the composer has stuck down tape, creating a kind of instrument, notation and recording in one; it must have been done. If only VHS were as easy to read without a moving head; hacked PixelVision cameras might be the only answer!

http://www.dalealexanderwilson.com/
http://en.wikipedia.org/wiki/Wire_recording
http://en.wikipedia.org/wiki/Tape_recorder
http://www.medienkunstnetz.de/works/random-access/images/3/
http://www.guggenheim.org/new-york/collections/collection-online/artwork/9536
http://arts.brighton.ac.uk/study/fine-art/fine-art-performance/case-study/analogue-tape-glove
http://anatlas.wordpress.com/2013/10/11/52nd-william-anastasi/comment-page-1/
http://www.terencehannum.com/
http://rhizome.org/editorial/2008/jun/26/less-lossy-more-glossy/?ref=archive_post_title


by Chris (noreply@blogger.com) at April 05, 2014 09:05 AM

April 04, 2014

Moxie Processor

Sign Extension

Moxie zero-extends all 8 and 16-bit loads from memory. Until recently, however, the GCC port didn’t understand how loads worked, and would always shift loaded values back and forth to either empty out the upper bits or sign-extend the loaded value. While correct, it was overly bloated. If we’re loading an unsigned char into a register, there’s no need to force the upper bits to clear. The hardware does this for us.

For instance, this simple C code….

..would compile to…
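
As a hypothetical stand-in for the kind of code being discussed (not the exact snippet from the post), consider a function that simply returns a byte loaded from memory:

    /* Because moxie zero-extends 8-bit loads in hardware, the compiler
     * should emit just the byte load here, with no extra shifts or masks
     * to clear the upper 24 bits of the register. */
    unsigned char buf[16];

    int get_byte(int i)
    {
        return buf[i];
    }

With the old code generation, the load would be followed by redundant shifts to clear (or sign-extend) the upper bits; with the fix described below, they disappear.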

Thanks to help from hackers on the GCC mailing list, I was finally able to teach the compiler how to treat memory loads correctly. This led to two changes…

  1. The introduction of 8 and 16-bit sign extension instructions (sex.b and sex.s). Sometimes we really do need to sign-extend values, and logical shift left followed by arithmetic shift right is a pretty expensive way to do this on moxie.
  2. The char type is now unsigned by default. If you have zero-extending 8-bit loads then you had better make your char type unsigned, otherwise your compiler output will be littered with sign extension instructions.

Now for the C code above, we get this nice output….

I believe that this was the last major code quality issue from the GCC port, and the compiler output should be pretty good now.

I’ve updated the upstream GCC, binutils and gdb (sim) repositories, my QEMU fork in github, as well as the MoxieLite VHDL core in the moxie-cores git repo.

by green at April 04, 2014 08:40 AM

April 02, 2014

Bunnie Studios

Crowdfunding the Novena Open Laptop

We’re launching a crowdfunding campaign around our Novena open hardware computing platform. Originally, this started as a hobby project to build a computer just for me and xobs – something that we would use every day, easy to extend and to mod, our very own Swiss Army knife. I’ve posted here a couple of times about our experience building it, and it got a lot of interest. So by popular demand, we’ve prepared a crowdfunding offering and you can finally be a backer.



Background



Novena is a 1.2GHz, Freescale quad-core ARM architecture computer closely coupled with a Xilinx FPGA. It’s designed for users who want to modify and extend their hardware: all the documentation for the PCBs are open and free to download, and it comes with a variety of features that facilitate rapid prototyping.

We are offering four variations, and at the conclusion of the Crowd Supply campaign on May 18, all the prices listed below will go up by 10%:

  • “Just the board” ($500): For crafty people who want to build their case and define their own style, we’ll deliver to you the main PCBA, stuffed with 4GiB of RAM, 4GiB microSD card, and an Ath9k-based PCIe wifi card. Boots to a Debian desktop over HDMI.
  • “All-in-One Desktop” ($1195): Plug in your favorite keyboard and mouse, and you’re ready to go; perfect for labs and workbenches. You get the circuit board above, inside a hacker-friendly case with a Full HD (1920×1080) IPS LCD.
  • “Laptop” ($1995): For hackers on the go, we’ll send you the same case and board as above, but with battery controller board, 240 GiB SSD, and a user-installed battery. As everyone has their own keyboard preference, no keyboard is included.
  • “Heirloom Laptop” ($5000): A show stopper of beauty; a sure conversation piece. This will be the same board, battery, and SSD as above, but in a gorgeous, hand-crafted wood and aluminum case made by Kurt Mottweiler in Portland, Oregon. As it’s a clamshell design, it’s also the only offering that comes with a predetermined keyboard.

All configurations will come with Debian (GNU/Linux) pre-installed, but of course you can build and install whatever distro you prefer!

Novena Gen-2 Case Design

Followers of this blog may have seen a post featuring a prototype case design we put together last December. These were hand-built cases made from aluminum and leather and meant to validate the laptop use case. The design was rough and crafted by my clumsy hands – dubbed “gloriously fuggly [sic]” – yet the public response was overwhelmingly positive. It gave us confidence to proceed with a 2nd generation case design that we are now unveiling today.



The first thing you’ll notice about the design is that the screen opens “the wrong way”. This feature allows the computer to be usable as a wall-hanging unit when the screen is closed. It also solves a major problem I had with the original clamshell prototype – it was a real pain to access the hardware for hacking, as it’s blocked by the keyboard mounting plate.

Now, with the slide of a latch, the screen automatically pops open thanks to an internal gas spring. This isn’t just an open laptop — it’s a self-opening laptop! The internals are intentionally naked in this mode for easy access; it also makes it clear that this is not a computer for casual home use. Another side benefit of this design is there’s no fan noise – when the screen is up, the motherboard is exposed to open air and a passive heatsink is all you need to keep the CPU cool.

Another feature of this design is the LCD bezel is made out of a single, simple aluminum sheet. This allows users with access to a minimal machine shop to modify or craft their own bezels – no custom tooling required. Hopefully this makes adding knobs and connectors, or changing the LCD relatively easy. In order to encourage people to experiment, we will ship desktop and laptop devices with not one, but two LCD bezels, so you don’t have to worry about having an unusable machine if you mess up one of the bezels!

The panel covering the “port farm” on the right hand side of the case is designed to be replaceable. A single screw holds it in place, so if you design your own motherboard or if you want to upgrade in the future, you’re not locked into today’s port layout. We take advantage of this feature between the desktop and the laptop versions, as the DC power jack is in a different location for the two configurations.

Finally, the inside of the case features a “Peek Array”. It’s an array of M2.5 mounting holes (yes, they are metric) populating the extra unused space inside the case, on the right hand side in the photo above. It’s named after Nadya Peek, a graduate student at MIT’s Center for Bits and Atoms. Nadya is a consummate maker, and is a driving force behind the CBA’s Fab Lab initiative. When I designed this array of mounting bosses, I imagined someone like Nadya making their own circuit boards or whatever they want, and mounting it inside the case using the Peek Array.

The first thing I used the Peek Array for is the speaker box. I desire loud but good quality sound out of my laptop, so I 3D printed a speakerbox that uses 36mm mini-monitor drivers, and mounted it inside using the Peek Array. I would be totally stoked if a user with real audio design experience was to come up with and share a proper tuned-port design that I could install in my laptop. However, other users with weight, space or power concerns can just as easily design and install a more modest speaker.

I started the Gen-2 case design in early February, after xobs and I finally decided it was time to launch a crowdfunding campaign. With a bit of elbow grease and the help of a hard working team of engineers and project managers at my contract manufacturing partner, AQS (that’s Celia and Chemmy pictured above, doing an initial PCBA fitting two weeks ago), I was able to bring a working prototype to San Jose and use it to give my keynote at EELive today.

The Heirloom Design (Limited Quantities)

One of the great things about open hardware is it’s easier to set up design collaborations – you can sling designs and prototypes around without need for NDAs or cumbersome legal agreements. As part of this crowdfunding campaign, I wanted to offer a really outstanding, no-holds barred laptop case – something you would be proud to have for years, and perhaps even pass on to your children as an heirloom. So, we enlisted the help of Kurt Mottweiler to build an “heirloom laptop”. Kurt is a designer-craftsman situated in Portland, Oregon and drawing on his background in luthiery, builds bespoke cameras of outstanding quality from materials such as wood and aluminum. We’re proud to have this offering as part of our campaign.

For the prototype case, Kurt is featuring rift-sawn white oak and bead-blasted-and-anodized 6061 aluminum. He developed a composite consisting of outer layers of paper backed wood veneer over a high-density cork core with intervening layers of 5.5 ounce fiberglass cloth, all bonded with a high modulus epoxy resin. This composite is then gracefully formed into semi-monocoque curves, giving a final wavy shape that is light and stiff while accommodating the need for air cooling.

The overall architecture of Kurt’s case mimics the industry-standard clamshell notebook design, but with a twist. The keyboard used within the case is wireless, and can be easily removed to reveal the hardware within. This laptop is an outstanding blend of tasteful design, craftsmanship, and open hardware. And, to wit, since these are truly hand-crafted units, no two units will be exactly alike – each unit will have its own grain and a character that reflects Kurt’s judgment for that particular piece of wood.

How You can Help

For the crowdfunding campaign to succeed, xobs and I need a couple hundred open source enthusiasts to back the desktop or standard laptop offering.

And that underlies the biggest challenge for this campaign – how do we offer something so custom and so complex at a price that is comparable to a consumer version, in low volumes? Our minimum funding goal of $250,000 is a tiny fraction of what’s typically required to recover the million-plus dollar investment behind the development and manufacture of a conventional laptop.

We meet this challenge with a combination of unique design, know-how, and strong relationships with our supply chain. The design is optimized to reduce the amount of expensive tooling required, while still preserving our primary goal of being easy to hack and modify. We’ve spent the last year and a half poring over three revisions of the PCBA, so we have high confidence that this complex design will be functional and producible. We’re not looking to recover that R&D cost in the campaign – that’s a sunk cost, as anyone is free to download the source and benefit from our thoroughly vetted design today. We also optimized certain tricky components, such as the LCD and the internal display port adapter, for reliable sourcing at low volumes. Finally, I spent the last couple of months traveling the world, lining up a supply chain that we feel confident can deliver this design, even in low volume, at a price comparable to other premium laptop products.

To be clear, this is not a machine for the faint of heart. It’s an open source project, which means part of the joy – and frustration – of the device is that it is continuously improving. This will be perhaps the only laptop that ships with a screwdriver; you’ll be required to install the battery yourself, screw on the LCD bezel of your choice, and you’ll get the speakers as a kit, so you don’t have to use our speaker box design – if you have access to a 3D printer, you can make and fine tune your own speaker box.

If you’re as excited about having a hackable, open laptop as we are, please back our crowdfunding campaign at Crowd Supply, and follow @novenakosagi for real-time updates.

by bunnie at April 02, 2014 03:58 PM

Free Electrons

Embedded Linux Conference 2014, Free Electrons participation

One of the most important conferences of the Embedded Linux community will take place at the end of this month in California: the Embedded Linux Conference will be held in San Jose from April 29th to May 1st, co-located with the Android Builders Summit. The schedule for both of these events has been published, and it is full of interesting talks on a wide range of embedded topics.

As usual, Free Electrons will participate in this conference, and this participation will be our most important ever:

If you are interested in embedded Linux, we highly advise you to attend this conference. And if you are interested in business or recruiting opportunities with Free Electrons, it will also be the perfect time to meet us!

by Thomas Petazzoni at April 02, 2014 02:39 PM

March 31, 2014

Andrew Zonenberg, Silicon Exposed

Getting my feet wet with invasive attacks, part 1: Target recon

This is part 1 of a 2-part series. Part 2, The Attack, is here.

One of the reasons I've gone a bit dark lately is that running CSCI 6974, RPI's experimental hardware reverse engineering class, has been eating up a lot of my time.

I wanted to make the final lab for the course a nice climax to the semester and do something that would show off the kinds of things that are possible if you have the right gear, so it had to be impressive and technically challenging. The obvious choice was a FIB circuit edit combined with invasive microprobing.

After slaving away for quite a while (this was started back in January or so) I've managed to get something ready to show off :) The work described here will be demonstrated in front of my students next week as part of the fourth lab for the class.

The first step was to pick a target. I was interested in the Xilinx XC2C32A for several reasons and was already using other parts of the chip as a teaching subject for the class. It's a pure-digital CMOS CPLD (no analog sense amps and a fairly regular structure) made on a relatively modern process (180 nm 4-metal UMC) but not so modern as to be insanely hard to work with. It was also quite cheap ($1.25 a pop for the slowest speed grade in VQG44 package on DigiKey) so I could afford to kill plenty of them during testing.

The next step was to decap a few, label interesting pins, and draw up a die floorplan. Here's a view of the die at the implant layer after Dash etch; P-type doping shows up as brown. (John did all of the staining work and got great results. Thanks!)

XC2C32A die floorplan after Dash etch
The bottom half of the die is support infrastructure with EEPROM banks for storing the configuration bitstream toward the center and JTAG/configuration stuff in a U-shape below and to either side of the memory array. (The EEPROM is mislabeled "flash" in this image because I originally assumed it was 1T NOR flash. Higher magnification imaging later showed this to be wrong; the bit cells are 2T EEPROM.)

The top half of the die is the actual programmable logic, laid out in a "butterfly" structure. The center spine is the ZIA (global routing, also referred to as the AIM in some datasheets), which takes signals from the 32 macrocell flipflops and 33 GPIO pins and routes them into the function blocks. To either side of the spine are the two FBs, which consist of an 80 x 56 AND array (simplifying a bit... the actual structure is more like 2 blocks x 20 rows x 2 interleaved cells x 56 columns), a 56 x 16 OR array, and 16 macrocells.

I wanted some interesting data to show my students so there were two obvious choices. First, I could try to defeat the code protection somehow and read bitstreams out of a locked device via JTAG. Second, I could try to read internal device state at run time. The second seemed a bit easier so I decided to run with it (although defeating the lock bits is still on my longer-term TODO.)

The obvious target for probing internal runtime state is the ZIA, since all GPIO inputs and flipflop states have to go through here. Unfortunately, it's almost completely undocumented! Here's the sum total of what DS090 has to say about it (pages 5-6):
The Advanced Interconnect Matrix is a highly connected low power rapid switch. The AIM is directed by the software to deliver up to a set of 40 signals to each FB for the creation of logic. Results from all FB macrocells, as well as, all pin inputs circulate back through the AIM for additional connection available to all other FBs as dictated by the design software. The AIM minimizes both propagation delay and power as it makes attachments to the various FBs.
Thanks for the tidbit, Xilinx, but this really isn't gonna cut it. I need more info!

The basic ZIA structure was pretty obvious from inspection of the implant layer: 20 identical copies of the same logic. This suggested that each row was responsible for feeding two signals left and two right.

SEM imaging of the implant layer showed the basic structure to be largely, but not entirely, symmetric about the left-right axis. At the far outside a few cells of the PLA AND array can be seen. Moving toward the center is what appears to be a 3-stage buffer, presumably for driving the row's output into the PLA. The actual routing logic is at center.

The row appeared entirely symmetric top-to-bottom so I focused my future analysis on the upper half.

Single row of the ZIA seen at the implant layer after Dash etch. Light gray is P-type doping, medium gray is N-type doping, dark gray is STI trenches.
Looking at the top metal layer revealed the expected 65 signals.

Single row of the ZIA seen on metal 4
The signals were grouped into six groups with 11, 11, 11, 11, 11, and 10 signals in them. This led me to suspect that there was some kind of six-fold structure to the underlying circuitry, a suspicion which was later proven correct.

Inspection of the configuration EEPROM for the ZIA showed it to be 16 bits wide by 48 rows high.

ZIA configuration EEPROM (top few rows)
Since the global configuration area in the middle of the chip was 8 rows high this suggested that each of the 40 remaining EEPROM rows configured the top or bottom half of a ZIA row.

Of the 16 bits in each row, 8 bits presumably controlled the left-hand output and 8 controlled the right. This didn't make a lot of sense at first: dense binary coding would require only 7 bits for 65 channels and one-hot coding would need 65 bits.

Reading documentation for related device families sometimes helps to shed some light on how a part was designed, so I took a look at some of the whitepapers for the older 350 nm CoolRunner XPLA3 series. They went into some detail on how full crossbar routing was wasteful of chip area and often not necessary to get sufficient routability. You don't need to be able to generate every 40! permutations of a given subset of signals as long as you can route every signal somehow. Instead, the XPLA3's designers connected only a handful of the inputs to each row and varied the input selection for each row so as to allow almost every possible subset to be selected somehow.

This suggested a 2-level hierarchy to the ZIA mux. Instead of being a 65:1 mux it was a 65:N hard-wired mux followed by a N:1 programmable mux feeding left and another N:1 feeding right. 6 seemed to be a reasonable guess for N, given the six groups of wires on metal 4.

ZIA mux structure
This hypothesis was quickly confirmed by looking at M3 and M3-M4 vias: Each row had six short wires on M3, one under each of the six groups of wires in the bus. Each of these short lines was connected by one via to one of the bus lines on M4. The via pattern varied from row to row as expected.

ZIA M3-M4 vias

I extracted the full via pattern by copying a tracing of M4 over the M3 image and using the power vias running down the left side as registration marks. (Pro tip: Using a high accelerating voltage, like 20 kV, in a SEM gives great results on aluminum processes with tungsten via plugs. You get backscatters from vias through the metal layer that you can use for aligning image stacks.) A few of the rows are shown above.

At this point I felt I understood most of the structure so the next step was full circuit extraction! I had John CMP a die down to each layer and send to me for high-res imaging in the SEM.

The output buffers were fairly easy. As I expected they were just a 3-stage inverter cascade.

Output buffer poly/diffusion/contact tracing

Output buffer M1 tracing
Output buffer gate-level schematic

Individual cell schematics
Nothing interesting was present on any of the upper layers above here, just power distribution.

The one surprising thing about the output buffer was that the NMOS on the third stage had a substantially wider channel than the PMOS. This is probably something to do with optimizing output rise/fall times.

Looking at the actual mux logic showed that it was mostly tiles of the same basic pattern (a 6T SRAM cell, a 2-input NOR gate, and a large multi-fingered NMOS pass transistor) except for the far left side.

Gate-level layout of mux area

Left side of mux area, gate-level layout
The same SRAM-feeding-NOR2 structure is seen, but this time the output is a small NMOS or PMOS pass transistor.

After tracing M1, it became obvious what was going on.

Left side of mux area, M1

The upper and lower halves control the outputs to function blocks 1 and 2 respectively. The two SRAM bits allow each output (labeled MUXOUT_FBx) to be pulled high, low, or float. A global reset line of some sort, labeled OGATE, is used to gate all logic in the entire ZIA (and presumably the rest of the chip); when OGATE is high the SRAM bits are ignored and the output is forced high.

Here's what it looks like in schematic:

Gate-level schematics of pullup/pulldown logic
Cell schematics
In the schematics I drew the NOR2v0x1 cell as its de Morgan dual (AND with inverted inputs) since this seemed to make more sense in the context of the circuit: the output is turned on when the active-low input is low and OGATE is turned off.

It's interesting to note that while almost all of the config bits in the circuit are active-low, PULLUP is active-high. This is presumably done to allow the all-ones state (a blank EEPROM array) to put the muxes in a well-defined state rather than floating.

Turning our attention to the rest of the mux array shows a 6:1 one-hot-coded mux made from NMOS pass transistors. This, combined with the 2 bits needed for the pull-high/pull-low module, adds up to the expected 8.  The same basic pattern shown below is tiled three times.
Basic mux tile, poly/implant
Basic mux tile, M1
(Sorry for the misalignment of the contact layer, this was a quick tracing and as long as I was able to make sense of the circuit I didn't bother polishing it up to look pretty!)

The resulting schematic:

Schematic of muxes

M2 was used for some short-distance routing as well as OGATE, power/ground busing, and the SRAM bit lines.

M2 and M2-M3 vias


M3 was used for OGATE, power busing, SRAM word lines, the mask-programmed muxes, and the tri-state bus within the final mux.



M3 and M3-M4 vias

And finally, M4. I never found out what the leftmost power line went to, it didn't appear to be VCCINT or ground but was obviously power distribution. There's no reason for VCCIO to be running down the middle of the array so maybe VCCAUX? Reversing the global config logic may provide the answer.

M4
A bit of trial and error poking bits in bitstreams was sufficient to determine the ordering of signals. From right to left we have FB1's GPIO pins, the input-only pin, FB2's GPIO pins, then FB1's flipflops and finally FB2's flipflops.

Now that I had good intel on the target, it was time to plan the strike!

Part 2, The Attack, is here.

by Andrew Zonenberg (noreply@blogger.com) at March 31, 2014 11:34 PM

Elphel

Elphel, inc. on trip to Geneva, Switzerland.

University of Geneva

Monday, April 14, 2014 – 18:15 at Uni-Mail, room MR070, University of Geneva.

Elphel, Inc. is giving a conference entitled “High Performance Open Hardware for Scientific Applications”. Following the conference, you will be invited to attend a round-table discussion to debate the subject with people from Elphel and Javier Serrano from CERN.

Javier studied Physics and Electronics Engineering. He is the head of the Hardware and Timing section in CERN's Beams Control group, and the founder of the Open Hardware Repository. Javier has co-authored the CERN Open Hardware Licence. He and his colleagues have also recently started contributing improvements to KiCad, a free software tool for the design of Printed Circuit Boards.

Elphel, Inc. has been invited by their partner specializing in stereophotogrammetry applications, the Swiss company Foxel SA, to Geneva, Switzerland, from April 14-21.

You can enjoy a virtual tour of the Geneva University by clicking on the links herein below:
(make sure to use the latest version of Firefox or Chromium to view the demos)

Foxel's team would be delighted to have all of Elphel's clients and followers participate in the conference.
A chat can also be organized in the next few days. Please contact us at Foxel SA.

If you do not have the opportunity to visit us in Geneva, the conference will be streamed live and the recording will be available.

by Alexandre at March 31, 2014 06:04 PM

Andrew Zonenberg, Silicon Exposed

Laser IC decapsulation experiments

Laser decapsulation is commonly used by professional shops to rapidly remove material before finishing with a chemical etch. Upon finding out that one of my friends had purchased a laser cutting system, we decided to see how well it performed at decapping.

Infrared light is absorbed strongly by most organics as well as some other materials such as glass. Most metals, especially gold, reflect IR strongly and thus should not be significantly etched by it. Silicon is nearly transparent to IR. The hope was that this would make laser ablation highly selective for packaging material over the die, leadframe, and bond wires.

Unfortunately I don't have any in-process photos. We used a raster scan pattern at fairly low power on a CO2 laser with near-continuous duty cycle.

The first sample was a Xilinx XC9572XL CPLD in a 44-pin TQFP.

Laser-etched CPLD with die outline visible
If you look closely you can see the outline of the die and wire bonds beginning to appear. This probably has something to do with the thermal resistances of gold bonding wires vs silicon and the copper leadframe.

Two of the other three samples (other CPLDs) turned out pretty similar except the dies weren't visible because we didn't lase quite as long.
Laser-etched CPLD without die visible
I popped this one under my Olympus microscope to take a closer look.

Focal plane on top of package
Focal plane at bottom of cavity
Scan lines from the laser's raster-etch pattern were clearly visible. The laser was quite effective at removing material at first glance; however, higher magnification provided reason to believe this process was not as effective as I had hoped.
Raster lines in molding compound
Raster lines in molding compound
Most engineers are not aware that "plastic" IC packages are actually not made of plastic. (The curious reader may find the "epoxy" page on siliconpr0n.org a worthwhile read).

Typical "plastic" IC molding compounds are actually composite materials made from glass spheres of varying sizes as filler in a black epoxy resin matrix. The epoxy blocks light from reaching the die and interfering with circuits through induced photocurrents and acts to bond the glass together. Unfortunately the epoxy has a thermal expansion coefficient significantly different from that of the die, so glass beads are added as a filler to counteract this effect. Glass is usually a significant percentage (80 or 90 percent) of the molding compound.

My hope was that the laser would vaporize the epoxy and glass cleanly without damaging the die or bond wires. It seems that the glass near the edge of the beam fused together, producing a mess which would be difficult or impossible to remove. This effect was even more pronounced in the first sample.

The edge of the die stood out strongly in this sample even though the die is still quite a bit below the surface. Perhaps the die (or the die-attach paddle under it) is a good thermal conductor and acted to heatsink the glass, causing it to melt rather than vaporize?
The first sample seen earlier in the article, showing the corner of the die
A closeup showed a melted, blasted mess of glass. About the only things able to easily remove this are mechanical abrasion or HF, both of which would probably destroy the die.
Fused glass particles
Fused glass particles

I then took a look at the last sample, a PIC18F4553. We had etched this one all the way down to the die just to see what would happen.
Exposed PIC18F4553 die
Edge of the die showing bond pads
Most bond wires were completely gone - it appeared that the glass had gotten so hot that it melted the wires even though they did not absorb the laser energy directly. The large reddish sphere at the center of the frame is what remains of a ball bond that did not completely vanish.

The surface of the die was also covered by fused glass. No fine structure at all was visible.

Looking at the overview photo, reddish spots were visible around the edge of the die and package. I decided to take a closer look in hopes of figuring out what was going on there.
Red glass on the edge of the hole
I was rather confused at first because there should have only been metal, glass, and plastic in that area - and none of these were red. The red areas had a glassy texture to them, suggesting that they were partly or mostly made of fused molding compound.

Some reading on stained glass provided the answer - cranberry glass. This is a colloid of gold nanoparticles suspended in glass, giving it color from scattering incoming light.

The normal process for making cranberry glass is to mix Au2O3 in with the raw materials before smelting them together. At high temperatures the oxide decomposes, leaving gold particles suspended in the glass. It appears that I've unintentionally found a second synthesis which avoids the oxidation step: flash vaporization of solid gold and glass followed by condensation of the vapor on a cold surface.

by Andrew Zonenberg (noreply@blogger.com) at March 31, 2014 03:37 PM

March 29, 2014

Video Circuits

Art électronique

As a British person from 2014, I have never wanted to be a young French kid in a polo neck from 1978 until now. This is pretty much my dream audio-visual studio, featuring some lovely shots of the EMS Spectron video synthesizer in action as well as a whole host of other nice EMS and custom rack gear for sound and video experimentation. http://www.ina.fr/video/CPA7805092804
Thanks to Jeff, my good friend from across the seas, for digging this video up!

by Chris (noreply@blogger.com) at March 29, 2014 06:19 AM

March 28, 2014

ZeptoBARS

SiTime SiT8008 - MEMS oscillator : weekend die-shot

SiTime SiT8008 is a programmable MEMS oscillator reaching quartz precision but with higher reliability and lower g-sensitivity. SiTime is also one of the companies that received investment from Rosnano, the Russian high-tech investment fund.

The photo of the MEMS die puzzled us for quite some time. Is it some sort of integrated SAW/STW resonator?

The trick is that to reach maximum Q-factor (up to ~186'000 according to the patents) the MEMS resonator must operate in vacuum. So they package the resonator _inside_ the die in a hydrogen atmosphere, then anneal it in vacuum so that the hydrogen escapes through the silicon. What we see here is only a cap with contacts to the "buried" MEMS resonator. We were unable to reach the resonator itself without an x-ray camera or an ion mill.

MEMS die size - 457x454 µm.

Thankfully, the relevant patents were specified right on the die: US6936491, US7514283, US7075160, US7750758 :)



Digital die contains LC PLL and digital logic for one-off frequency programming and temperature compensation.
Die size - 1409x1572 µm.



Poly level:


Standard cells, ~250nm technology.

March 28, 2014 10:54 PM

Geoffrey L. Barrows - DIY Drones

Visually stabilizing a Crazyflie, including in the dark

I've been working on adding visual stabilization to a Crazyflie nano quadrotor. I had two goals: first, to achieve the same type of hover that we demonstrated several years ago on an eFlite mCX; second, to do so in extremely low light levels, including in the dark, borrowing inspiration from biology. We are finally getting some decent flights.

Above is a picture of our sensor module on a Crazyflie. The Crazyflie is really quite small: the four motors form a square about 6 cm on a side. The folks at Bitcraze did a fantastic job assembling a virtual machine environment that makes it easy to modify and update the Crazyflie's firmware. Our sensor module comprises four camera boards (using an experimental low-light chip) connected to a main board built around an STM32F4 ARM. These cameras basically grab optical flow type information from the horizontal plane and then estimate motion based on global optical flow patterns. These global optical flow patterns are actually inspired by similar ones identified in fly visual systems. The result is a system that allows a pilot to maneuver the Crazyflie using the control sticks, and then will hover in one location when the control sticks are released.

Below is a video showing three flights. The first flight is indoors, with lights on. The second is indoors, with lights off but with some leaking light. The third is in the dark, but with IR LEDs mounted on the Crazyflie to work in the dark.

There is still some drift, especially in the darker environments. I've identified a noise issue on the sensor module PCB, and already have a new PCB in fab that should clean things up.

by Geoffrey L. Barrows at March 28, 2014 02:44 PM

March 27, 2014

Video Circuits

Tiny Dazzler (Andy Puls)

 Tiny Dazzler (Andy Puls) has some awesome video things going on over at his blog and here
The first video is of an impressive CMOS based video pattern generator 

by Chris (noreply@blogger.com) at March 27, 2014 07:32 AM

March 26, 2014

Bunnie Studios

Name that Ware, March 2014

The Ware for March 2014 is shown below.

I came across this at a gray market used parts dealer in Shenzhen. Round, high density circuit boards with big FPGAs and ceramic packages tend to catch my eye, as they reek of military or aerospace applications.

I have no idea what this ware is from, or what it’s for, so it should be interesting judging the responses — if there is no definitive identification, I’ll go with the most detailed/thoughtful response.

by bunnie at March 26, 2014 07:11 PM

Winner, Name that Ware February 2014

The Ware for February 2014 is an SPAC module from the racks of a 3C Series 16 computer, made by Honeywell (formerly 3C). According to the Ware’s submitter, the computer from which it came was either a DDP-116 or DDP-224 computer, but the exact identity is unknown as it was acquired in the 70′s and handed down for a generation.

As for a winner, it’s tough to choose — so many thoughtful answers. I’ll go the easy route and declare jd the winner for having the first correct answer. Congrats, and email me for your prize!

by bunnie at March 26, 2014 07:11 PM

Richard Hughes, ColorHug

GNOME Software on Ubuntu (II)

So I did a bit more hacking on PackageKit, appstream-glib and gnome-software last night. We’ve now got screenshots from Debian (which are not very good) and long application descriptions from the package descriptions (which are also not very good). It works well enough now, although you now need PackageKit from master as well as appstream-glib and gnome-software.

Screenshot_UbuntuSaucy_2014-03-26_15:27:33

Screenshot_UbuntuSaucy_2014-03-26_15:31:05

Screenshot_UbuntuSaucy_2014-03-26_15:55:45

This is my last day of hacking on the Ubuntu version, but I’m hopeful other people can take what I’ve done and continue to polish the application so it works as well as it does on Fedora. Tasks left to do include:

  • Get aptcc to honour the DOWNLOADED filter flag so we can show applications in the ‘Updates’ pane
  • Get aptcc to respect the APPLICATION filter to speed up getting the installed list by an order of magnitude
  • Get gnome-software (or appstream-glib) to use the system stock icons rather than the shitty ones shipped in the app-install-data package
  • Find out a way to load localized names and descriptions from the app-install-data gettext archive and add support to appstream-glib. You'll likely need to call dgettext(), bindtextdomain() and bind_textdomain_codeset() (see the sketch after this list)
  • Find out how to populate the ‘quality’ stars in gnome-software, which might actually mean adding more data to the app-install desktop files. This is the kind of data we need.
  • Find out why aptcc sometimes includes the package summary in the licence detail position
  • Improve the package details to human readable code to save bullet points and convert to a UTF-8 dot
  • Get the systemd offline-updates code working, which is completely untested
  • Find out why aptcc seems to use a SHA1 hash for the repo name (e.g. pkcon repo-list)
  • Find out why aptcc does not set the data part of the package-id to be prefixed with installed: for installed packages
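
For the localized-names item above, a minimal sketch of the gettext calls mentioned; the text domain and locale directory are assumptions for illustration, not something appstream-glib or app-install-data is known to define this way:

    /* Hypothetical lookup of a translated application name. */
    #include <libintl.h>
    #include <locale.h>
    #include <stdio.h>

    int main (void)
    {
        setlocale (LC_ALL, "");
        bindtextdomain ("app-install-data", "/usr/share/locale");
        bind_textdomain_codeset ("app-install-data", "UTF-8");

        /* the msgid would be the untranslated name from the .desktop file */
        printf ("%s\n", dgettext ("app-install-data", "Image Viewer"));
        return 0;
    }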

If you can help with any of this, please grab me on #PackageKit on freenode.

by hughsie at March 26, 2014 04:17 PM

March 25, 2014

Richard Hughes, ColorHug

GNOME Software on Ubuntu

After an afternoon of hacking on appstream-glib, I can show the fruits of my labours:

1

This needs gnome-software and appstream-glib from git master (or gnome-apps-3.14 in jhbuild) and you need to manually run PackageKit with the aptcc backend (--enable-aptcc).

2

It all kinda works with the data from /usr/share/app-install/*, but the icons are ugly as they are included in all kinds of sizes and formats, and there are no long descriptions except for the two (!) installed applications new enough to ship local AppData files. Also, rendering all those svgz files is muuuuch slower than a pre-processed png file like we ship with AppStream. The installed view also seems not to work. Only the C locale is present too, as I've not worked out how to get all the translations from an external gettext file in appstream-glib. I'd love to know how the Ubuntu software center gets long descriptions and screenshots also. But it kinda works. Thanks.

by hughsie at March 25, 2014 05:41 PM

March 24, 2014

Michele's GNSS blog

R820T with 28.8 MHz TCXO

I recently looked around for tools to use as low cost spectrum scanners, the target frequency range being 400 MHz to 1.7 GHz (incidentally, DVB-T and GPS).
Of course rtl-sdr is an attractive option, so I dusted off some dongles I had bought 6 months ago in China and played with them again, coming to the conclusion that I really like them, especially once their main limitation is overcome :)

The 28.8 MHz crystal is quite poor. I asked Takuji for a TCXO but he said he had emptied his stock rapidly. Of course a replacement is nowhere to be found at the big distributors (Digikey, Mouser, Farnell, RS, etc.), so I went to an old-time acquaintance at Golledge and, despite having to order 100 pieces, my request was fulfilled. Well, I modified a few RTL-SDR dongles and am now left with 90-something TCXOs, so if anybody needs a bunch just drop an email to sdr at onetalent-gnss dot com (beware, I will ask you 8 EUR a piece plus 20 EUR shipping and handling for most locations). After all, the dongles look quite good with the new crystal:
Figure 1: RTL-SDR with 28.8 MHz TCXO (Golledge GTXO-92)

I measured the frequency deviation with my simple GPS software receiver and I am happy to report that it is within spec, bounded to 2 ppm. By the way, I tried using other GNSS software receivers and will write about my experience in another post soon.

On the frequency plan side, the R820T combined with the RTL2832U is great for GPS. Most people would use it with an active antenna, where the LNA solves the problem of losses due to the impedance mismatch (50 against 75 ohm) and the noise figure of the tuner (3.5 dB according to datasheet).
The frequency plan with an IF of 3.57 MHz elegantly solves the problem of LO feedthrough and I/Q imbalance typical of ZIF tuners. The IF is recovered automatically in the digital domain by the demodulator so it does not appear in the recorded file. 8-bit I/Q recording at 2.048 Msps is more than sufficient for GPS, and I also tracked Galileo E1B/C with it (despite some obvious power loss due to the narrow filter band). In my tests, I used a Dafang technology DF5225 survey antenna, and the signal time plot shows that 5 bits are actually exercised. I powered the antenna with 3.3V from a Skytraq Venus8 (Ducat10 with S1216F8) through an all-by-one DC blocked passive 4-way splitter/combiner (6 dB unavoidable loss) from ETL-systems.

Figures 2, 3 and 4: Power spectrum, histogram, and time series at L1.

I posted three GPS files here:
https://app.box.com/s/wxizs3p7zu8x2jmbnzod
https://app.box.com/s/xvfabkqfkmehg5osa3ra
https://app.box.com/s/dqrel15mwj73xijflkma

Since someone asked for it, here are the tracking results of Galileo E19 plotted after the fact with Matlab:

and Galileo E20:

More to come later,
Michele

by noreply@blogger.com (Michele Bavaro) at March 24, 2014 10:49 PM

Andrew Zonenberg, Silicon Exposed

Microchip PIC32MZ process vs PIC32MX

Those of you keeping an eye on the MIPS microcontroller world have probably heard of Microchip's PIC32 series parts: MIPS32 CPU cores licensed from MIPS Technologies (bought by Imagination Technologies recently) paired with peripherals designed in-house by Microchip.
Although they're sold under the PIC brand name they have very little in common with the 8/16 bit PIC MCUs. They're fully pipelined processors with quite a bit of horsepower.

The PIC32MX family was the first to be introduced, back in 2009 or so. They're a MIPS M4K core (for the 64/100 pin parts) or M14K (for the 28/44 pin parts) at up to 80 MHz and max out at 128 KB of SRAM and 512 KB of NOR flash plus a fairly standard set of peripherals.

PIC32MX microcontroller

Somewhat disappointingly, the PIC32MX MMU is fixed mapping and there is no external bus interface. Although there is support for user/kernel privilege separation, all userspace code shares one address space. Another minor annoyance is that all PIC32MX parts run from a fixed 1.8V on-die LDO which normally cannot (the 300 series is an exception) be disabled or bypassed to run from an external supply.

The PIC32MZ series is just coming out now. They're so new, in fact, that they show as "future product" on Microchip's website and you can only buy them on dev boards, although I'm told that by around Q3-Q4 of this year they'll be reaching distributors. They fix a lot of the complaints I have with PIC32MX and add a hefty dose of speed: a 200 MHz max CPU clock and an on-die L1 cache.

PIC32MZ microcontroller

On-chip memory in the PIC32MZ is increased to up to 512 KB of SRAM and a whopping 2 MB of flash in the largest part. The new CPU core has a fully programmable MMU and support for an external bus interface capable of addressing up to 16MB of off-chip address space.

I'm a hacker at heart, not just a developer, so I knew the minute I got one of these things I'd have to tear it down and see what made it tick. I looked around for a bit, found a $25 processor module on Digikey, and picked it up.

The board was pretty spartan, which was fine by me as I only wanted the chip.

PIC32MZ processor module
Less than an hour after the package had arrived, I had the chip desoldered and simmering away in a beaker of sulfuric acid. I had done a PIC32MX340F512H a few days previously to provide comparison shots.

Without further ado, here's the top metal shots:

PIC32MX340F512H
PIC32MZ2048ECH
These photos aren't to scale; the MZ is huge (about 31.9 mm²). By comparison, the MX is around 20 mm².

From an initial impression, we can see that although both run at the same core voltage (1.8V) the MZ is definitely a new, significantly smaller fab process. While the top layer of the MX is fine-pitch signal routing, the top layer of the MZ is (except in a few blocks which appear to contain analog circuitry) completely filled with power distribution routing.

Top layer closeups of MZ (left), MX (right), same scale

Thick power distribution wiring on the top layer is a hallmark of deep-submicron processes, 130 nm and below. Most 180 nm or larger devices have at least some signal routing on the top layer.

Looking at the mask revision markings gives a good hint as to the layer count and stack-up.

Mask rev markings on MZ (left), MX (right), same scale
The MZ appears to be one thick aluminum layer and five thin copper layers for a total of six, while the MX is four layers and probably all aluminum.

Enough with the top layer... time to get down! Both samples were etched with HF until all metal and poly was removed.

The first area of interest was the flash.

NOR flash on MZ (left), MX (right), different scales
Both arrays appear to be the same standard NOR structure, although the MZ's array is quite a bit denser: the bit cell pitch is 643 x 270 nm (0.173 µm²/bit) while the MX's is 1015 x 676 nm (0.686 µm²/bit). The 3.96x density increase suggests a roughly 2x process shrink.

The white cylinders littering the MX die are via plugs, most likely tungsten, left over after the HF etch. The MZ appears to use a copper damascene process without via plugs, although since no cross section was performed details of layer thicknesses etc are unavailable.

The next target was the SRAM.

6T SRAM on MZ (left), MX (right), different scales
Here we start to see significant differences. The MX uses a fairly textbook 6T "doughnut + H" SRAM structure while the MZ uses a more modern lithography-optimized pattern made of all straight lines with no angles, which is easier to etch. This kind of bit cell is common in leading-edge processes but this is the first time I've seen it in a commodity MCU.

Cell pitch for the MZ is 1345 x 747 nm (1.00 µm²/bit) while the MX is 1895 x 2550 nm (4.83 µm²/bit). This is a 4.83x increase in density.

The last area of interest was the standard cell array for the CPU.

Closeup of standard cells on MZ (left), MX (right), different scales
Channel length was measured at 125-130 nm for the MZ and 250-260 nm for the MX.

Both devices also had a significant number of dummy cells in the gate array, suggesting that the designs were routing-constrained.

Dummy cells in MZ
Dummy cells in MX

In conclusion, the PIC32MZ is a significantly more powerful 130 nm upgrade to the slower 250 nm PIC32MX family. If Microchip fixes most of the silicon bugs before they launch I'll definitely pick up a few and build some stuff with them.

I wasn't able to positively identify the fab either device was made on; however, the fill patterns and power distribution structure on the MZ are very similar to those of the TI AM1707, which is fabricated by TSMC, so that's my first guess.

For more info and die pics check out the SiliconPr0n pages for the two chips:

by Andrew Zonenberg (noreply@blogger.com) at March 24, 2014 07:13 PM

Richard Hughes, ColorHug

GNOME Software 3.12.0 Released!

Today I released gnome-software 3.12.0 — with a number of new features and a huge number of bugfixes:

gnome-software-312-main

I think I’ve found something interesting to install — notice the auto-generated star rating which tells me how integrated the application is with my environment (i.e. is it available in my language) and if the application is being updated upstream. Those thumbnails look inviting:

gnome-software-312-details

We can continue browsing while the application installs — also notice the ‘tick’ — this will allow me to create and modify application folders in gnome-shell so I can put the game wherever I like:

gnome-software-312-installing

The updates tab looks a little sad; there’s no update metadata on rawhide for my F20 GNOME 3.12 COPR, but this looks a lot more impressive on F20 or the yet-to-be-released F21. At the moment we’re using the AppData metadata in place of update descriptions there. Yet another reason to ship an AppData file.

gnome-software-312-updates

We can now safely remove sources, which means removing the applications and addons that we installed from them. We don’t want applications sitting around on our computer not being updated and causing dependency problems in the future.

gnome-software-312-sources

Development in master is now open, and we’ve already merged several large patches. The move to libappstream-glib is a nice speed boost, and other more user-visible features are planned. We also need some documentation; if you’re interested please let us know!

by hughsie at March 24, 2014 05:31 PM

March 22, 2014

ZeptoBARS

TI TL431 adjustable shunt regulator : weekend die-shot

TI TL431 is an adjustable shunt regulator often used in linear supplies with an external power transistor.
Die size 1011x1013 µm.


March 22, 2014 04:25 PM

March 21, 2014

Geoffrey L. Barrows - DIY Drones

What can bees tell us about seeing and flying at night?

(Image of Megalopta Genalis by Michael Pfaff, linked from Nautilus article)

How would you like your drone to use vision to hover, see obstacles, and otherwise navigate, but do so at night in the presence of very little light? Research on nocturnal insects will (in my opinion) give us ideas on how to make this possible.

A recent article in Nautilus describes the research being performed by Lund University Professor Eric Warrant on Megalopta genalis, a bee that lives in the Central American rainforest and does its foraging after sunset and before sunrise, when light levels are low enough to keep most other insects grounded but just barely adequate for Megalopta to perform all the requisite bee navigation tasks. This includes hovering, avoiding collisions with obstacles, visually recognizing its nest, and navigating out and back to its nest by recognizing illumination openings in the branches above. Deep in the rainforest the light levels are much lower than out in the open; Megalopta seems able to perform these tasks when the light levels are as low as two or three photons per ommatidium (compound eye element) per second!

Professor Warrant and his group theorize that Megalopta's vision system uses "pooling" neurons that combine the acquired photons from groups of ommatidia to obtain the benefit of higher photon rates, a trick similar to how some camera systems extend their ability to operate in low light levels. In fact, I believe even the PX4flow does this to some extent when indoors. The "math" behind this trick is sound, but what is missing is hard neurophysiological evidence of it in Megalopta, which Prof. Warrant and his colleagues are trying to obtain. As the article suggests, this work is sponsored in part by the US Air Force.

You have to consider the sheer difference between the environment of Megalopta and the daytime environments in which we normally fly. On a sunny day, the PX4flow sensor probably acquires around 1 trillion photons per second. Indoors, that probably drops to about 10 billion photons per second. Now Megalopta has just under 10,000 ommatidia, so at 2 to 3 photons per ommatidia per second it experiences around 30,000 photons per second. That is a difference of up to seven orders of magnitude, which is even more dramatic when you consider that Megalopta's 30k photons are acquired omnidirectionally, and not just over a narrow field of view looking down.

by Geoffrey L. Barrows at March 21, 2014 07:47 PM

March 19, 2014

Richard Hughes, ColorHug

AppStream Logs, False Positives and You

Quite a few people have asked me how the AppStream distro metadata is actually generated for their app. The actual extraction process isn't trivial, and on Fedora we also do things like supply missing AppData files for some key apps and replace some upstream screenshots on others.

In order to make this more transparent, I’m going to be uploading the logs of each generation run. If you’ve got a few minutes I’d appreciate you finding your application there and checking for any warnings or errors. The directory names are actually Fedora package names, but usually it’s either 1:1 or fairly predictable.

If you've got an application that's being blacklisted when it shouldn't be, or a GUI application that's in Fedora but not in that list, then please send me email or grab me on IRC. The rules for inclusion are here. Thanks.

by hughsie at March 19, 2014 10:41 AM

March 18, 2014

Richard Hughes, ColorHug

Announcing Appstream-Glib

For a few years now Appstream and AppData adoption has been growing. We’ve got client applications like GNOME Software consuming the XML files, and we’ve got several implementations of metadata generators for a few distros now. We’ve also got validation tools we’re encouraging upstream applications to use.

The upshot of this was the same code was being duplicated across 3 different projects of mine, all with different namespaces and slightly different defined names. Untangling this mess took a good chunk of last week, and I’ve factored out 2759 lines of code from gnome-software, 4241 lines from createrepo_as, and the slightly less impressive 178 lines from appdata-tools.

The new library has a simple homepage, and so far a single release. I'd encourage people to check this out and provide early comments, as I'm going to switch gnome-software to using this as soon as it branches for 3.12. I'm also planning on switching createrepo_as and appdata-tools for the next releases too, so things like jhbuild modulesets need to be updated and tested by somebody.

Appstream-Glib 0.1.0 provides just enough API to make sense for a first release, but I’m going to be continuing to abstract out useful functionality from the other projects to share even more code. I’ve spent a few long nights profiling the XML parsing code, and I’m pleased to say the load time of gnome-software is 160ms faster with this new library, and createrepo_as completes the metadata generation 4 minutes faster. Comments, suggestions and patches very welcome. There’s a Fedora package linked from the package review bug if you’d rather test that. Thanks.

by hughsie at March 18, 2014 02:20 PM

ZeptoBARS

TI LM393 - dual comparator : weekend die-shot

TI LM393 - dual comparator, one of the old workhorses of electronics.
Die size 704x748 µm.


March 18, 2014 09:34 AM

March 16, 2014

ZeptoBARS

Ti TS5A3159 - 1Ω analog switch : weekend die-shot

Ti TS5A3159 is a 1.65-5V 2:1 analog switch with ~1Ω matched channel resistance and a "break-before-make" feature.
Die size 1017x631 µm, 1µm technology.


March 16, 2014 06:15 AM

March 06, 2014

Video Circuits

Some of my stills

Here are some stills from some DIY video synth experiments; slow progress.

by Chris (noreply@blogger.com) at March 06, 2014 10:17 AM

Akirasrebirth

Akirasrebirth has some nice circuit-bent video stuff and an Arduino-based video project here

by Chris (noreply@blogger.com) at March 06, 2014 10:15 AM

Victoria Keddie

Victoria Keddie's beautiful new piece Helios Electro; see an older post about her work here

"Sound and video feedback systems, signal generation, DAC, wavetek, oscillators, T- resonator, surveillance camera, CRT monitor feedback, For-A, and other things."

too cool.


by Chris (noreply@blogger.com) at March 06, 2014 09:52 AM

March 03, 2014

Zedstar

nibble.io

I have been working on applications that can transfer data using sound on mobile and embedded devices. The first product I have created is called NibblePin, which is specifically designed to exchange BBM PINs. All signal processing is done on the device, and the implementation is portable across a range of embedded hardware. The first port of this application is for BlackBerry 10, but I will work on some other mobile OSes and platforms. For further updates check out http://nibble.io

by john at March 03, 2014 08:44 PM

February 24, 2014

OggStreamer

#oggstreamer – Firmware Release Candidate 3 – released ;)

It has been a while since there was an official Firmware Update for the OggStreamer – now it is time to release RC3 (maybe the last Release Candidate before V1.0).

What's new:

  • Working WebGUI-Firmware upload (until RC3 you had to use the Command Line Tools)
  • Support for MP3 completed
  • Patches for the AudioDSP are now working (fixes high-pitch MP3 issues and incorrect sample rates)
  • Support for DynDNS – Services like FreeDNS, DynDNS and many more
  • Support for the ShoutCast (ICY) and legacy IceCast1 Protocol
  • Cleaner code for the WebGUI – now using libcgi to be compliant with the GPL

Where to get it:

The upload_firmware.sh script and the update-rc3.tgz can be found in the repositories if you want to update from a UNIX-like environment.

For Windows, download the updater tool here.

Once you have installed RC3 on your OggStreamer, future updates can be done via the WebGUI ;)


by oggstreamer at February 24, 2014 04:40 PM

Richard Hughes, ColorHug

GNOME 3.12 on Fedora 20

I’ve finished building the packages for GNOME 3.11.90. I’ve done this as a Fedora 20 COPR. It’s probably a really good idea to test this in a VM rather than your production systems as it’s only had a small amount of testing.

If it breaks, you get to keep all 132 pieces. It’s probably also not a good idea to be asking fedora-devel or fedoraforums for help when using these packages. If you don’t know how to install a yum repo these packages are not for you.

Comments and suggestions, welcome. Thanks.

by hughsie at February 24, 2014 02:44 PM

February 21, 2014

Free Electrons

Free Electrons at Embedded World 2014, Nuremberg, Germany

Embedded World 2014, Germany

Embedded World is the world’s largest trade show about embedded systems. In 2013, it attracted around 900 exhibitors, over 22,000 visitors and almost 1,500 congress participants.

This year, Free Electrons will be represented by our CEO Michael Opdenacker. This should be a great opportunity for us to understand our customers better, by meeting embedded system makers, by seeing what their needs are and what technologies they use. It will also be an opportunity to meet well-known members of the technical community. In particular, here are a few well-known people who are going to speak at the congress:

Don’t hesitate to contact us if you are attending this event too and are interested in knowing Free Electrons better, for business, partnership or even career opportunities!

by Michael Opdenacker at February 21, 2014 05:55 AM

February 20, 2014

OggStreamer

#oggstreamer – Talk about the OggStreamer and OpenHardware @ MediaLab Prado

I had the chance to present the OggStreamer at MediaLab Prado, and I also shared some random thoughts about OpenHardware and the Open Technology Laboratory in Austria, http://otelo.or.at

(Click on the image to get to the video.)

Many thanks to the people at MediaLab who made this talk possible ;)


by oggstreamer at February 20, 2014 04:02 PM

February 17, 2014

Altus Metrum

keithp's rocket blog: MicroPeak Approved for NAR Contests

MicroPeak Approved for NAR Contests

The NAR Contest Board has approved MicroPeak for use in contests requiring a barometric altimeter starting on the 1st of April, 2014. You can read the announcement message on the contestRoc Yahoo message board here:

Contest Board Approves New Altimeter

The message was sent out on the 30th of January, but there is a 90 day waiting period after the announcement has been made before you can use MicroPeak in a contest, so the first date approved for contest flights is April 1. After that date, you should see MicroPeak appear in Appendix G of the pink book, which lists the altimeters approved for contest use.

Thanks much to the NAR contest board and all of the fliers who helped get MicroPeak ready for this!

February 17, 2014 08:45 AM

ZeptoBARS

FTDI FT232RL: real vs fake

For quite some time, when you buy FTDI FT232RL chips from shady suppliers you have a good chance of getting a mysteriously buggy chip which only works with drivers 2.08.14 or earlier. We've got a pair of such FTDI FT232RL chips - one genuine and one fake - and decided to check whether there is an internal difference between them. In the following photo the left one is genuine, the right one is fake. One can notice a difference in marking - on the genuine chip it's laser-engraved, while on the buggy one it is printed (although this is not a universal distinguishing factor for other chips).



Genuine FT232RL



After etching metal layers:


Let's take a closer look at different parts of the chip. Here are rows of auto-synthesized standard cells:


ROM? EEPROM?:


SRAM:


Fake FT232RL

This chip is completely different! We can notice right away that the number of contact pads is much higher than needed. The chip has the marking "SR1107 2011-12 SUPEREAL".


After etching metal layers:


Closer look at standard cells:


Different blocks of the chip have different-looking standard cells. It is likely that some modules were licensed(?) as layout, not HDL:


First type of SRAM:


Second type of SRAM:


Finally - mask ROM programmed on poly level, so we can clearly see firmware data:


Comparison of manufacturing technology

Chip          | Die size      | Technology  | SRAM cell area
FTDI FT232RL  | 3288x3209 µm  | 600-800 nm  | 123 µm²
Fake FT232RL  | 3489x3480 µm  | 500 nm      | 68 µm² and 132 µm²

While the technology node is comparable, it seems that the original FT232RL used fewer metal layers, hence the much lower logic cell density. The fake chip is slightly larger despite the slightly more advanced technology.

Summary

It seems that in this case Chinese designers implemented a protocol-compatible "fake" chip using a mask-programmable microcontroller. This way they only needed to redo one mask - much cheaper than a full mask set - which also explains the many redundant pads on the die. The fake chip worked reasonably well until FTDI released a driver update which was able to detect fake chips via USB and, in that case, send only 0's. It was impossible to foresee such further driver checks without full schematic recovery, and these hidden tricks saved FTDI's profits.

What's the economic reason for making a software fake of a well-known chip instead of making a new one under your own name? This way they don't need to buy a USB VID, sign drivers with Microsoft, or spend on advertising. The fake chip can be used right away in numerous mass-manufactured products, while a new chip would require designing new products (or revisions) - so the sales ramp-up would only happen 2-3 years later. Die manufacturing cost is roughly the same for both dies (~10-15 cents).

From now on one should pay more and more attention when working with small shady distributors. Their slightly lower price could cause numerous hours of debugging fun.

February 17, 2014 07:29 AM

February 15, 2014

Altus Metrum

keithp's rocket blog: AltOS 1.3.2

AltOS 1.3.2 — Bug fixes and improved APRS support

Bdale and I are pleased to announce the release of AltOS version 1.3.2.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including bug fixes for TeleMega, TeleMetrum v2.0 and AltosUI.

AltOS Firmware — GPS Satellite reporting and APRS improved

Firmware version 1.3.1 has a bug on TeleMega when it has data from more than 12 GPS satellites. This causes buffer overruns within the firmware. 1.3.2 limits the number of reported satellites to 12.
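
(As an illustration only, not the actual AltOS source: the kind of clamp described above looks roughly like the C sketch below, where MAX_REPORTED_SATS and the structure names are made up for the example.)

    #include <stdint.h>

    #define MAX_REPORTED_SATS 12              /* size of the fixed report buffer */

    struct sat_info { uint8_t svid; uint8_t c_n0; };

    /* Copy at most MAX_REPORTED_SATS entries, so a receiver tracking more
     * than 12 satellites can no longer overrun the report buffer. */
    static int report_sats(const struct sat_info *in, int nsats,
                           struct sat_info out[MAX_REPORTED_SATS])
    {
        if (nsats > MAX_REPORTED_SATS)
            nsats = MAX_REPORTED_SATS;
        for (int i = 0; i < nsats; i++)
            out[i] = in[i];
        return nsats;
    }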

APRS now continues to send the last known good GPS position, and reports GPS lock status and number of sats in view in the APRS comment field, along with the battery and igniter voltages.

AltosUI — TeleMega GPS Satellite, GPS max height and Fire Igniters

AltosUI was crashing when TeleMega reported that it had data from more than 12 satellites. While the TeleMega firmware has been fixed to never do that, AltosUI also has a fix in case you fly a TeleMega board without updated firmware.

GPS max height is now displayed in the flight statistics. As the u-Blox GPS chips now provide accurate altitude information, we’ve added the maximum height as computed by GPS here.

Fire Igniters now uses the letters A through D to label the extra TeleMega pyro channels instead of the numbers 0-3.

February 15, 2014 10:45 AM

February 14, 2014

Video Circuits

Bermuda Triangle Lovely Audio Visual Extravaganza featuring James Alec Hardy

So James gave me the heads up about this great event this Saturday for the benefit of Resonance104.4fm.

He will be presenting the first performance of his Ziggurat 00120140215 broadcast system around 20:30ish; here is a little sneak preview


and the rest of the audio visual line up looks great too

When: Saturday 15th February 2014 8pm - 2am
Where: Roxy Bar & Screen, 128-132 Borough High St, London SE1 1LB
Door: £5 donation to Resonance104.4fm
more info here 





by Chris (noreply@blogger.com) at February 14, 2014 06:59 AM

Carol Goss


Carol Goss has been active in abstract video work from very early on. Have a look at her excellent website, www.improvart.com, for more information; here are some selected stills from her site just to give an initial idea of the range of her work.


Photo of Carol Goss in live performance at Joseph Papp's Public Theater, NYC 1978.

TOPOGRAPHY
Carol Goss - Paik-Abe Synthesizer
Paul Bley - Electric Piano
Bill Connors - Electric Guitar

S-CONSTRUCT
Carol Goss - Computer Animation
Don Preston - Audio Synthesizer

KUNDALINI
Carol Goss / Video Feedback
Perry Robinson / Clarinet
Badal Roy + Nana Vasconcelos / Percussion

by Chris (noreply@blogger.com) at February 14, 2014 06:44 AM

February 12, 2014

Andrew Zonenberg, Silicon Exposed

Process overview: UMC 180nm eNVM

I've been reverse engineering a programmable logic device (Xilinx XC2C32A) made on UMC's 180nm eNVM process for the last few months and have been a little light on blog posts. I'm a big fan of the process writeups Chipworks does so I figured I'd try my hand at one ;)

The target devices were packaged in a 32-pin QFN. The first part of the analysis was to sand the entire package down to the middle of the device and polish with diamond paste to get a quick overview of the die and packaging stack. (Shops with big budgets normally use X-ray systems for this.) There were a few scratches in the section from sanding, but since the closeups were going to be done on another die it wasn't necessary to polish them out.

Packaged device cross section
Packaged device cross section

Total die thickness including BEOL was just over 300 μm. From the optical image, four layers of metal can be seen. The whitish color hinted that it was aluminum rather than copper, but there's no sense jumping to conclusions since the next die was about to hit the SEM.

A second specimen was depackaged using sulfuric acid, sputtered in platinum to reduce charging, and sectioned using a gallium ion FIB at a slight angle to the east-west routing.

FIB cross section of metal stack
From this image, it is easy to get some initial impressions of the process:
  • The overglass consists of two layers of slightly different compositions, possibly an oxide-nitride stack. 
  • The process is planarized up to metal 4, but not including overglass.
  • Metal has an adhesion/barrier layer at the top and bottom and not the sides, and is wider at the bottom than the top. This rules out damascene patterning and suggests that the metal layers are dry-etched aluminum.
  • Silicide layers are visible above the polysilicon gates and at the source/drain implants.
  • Vias have a much higher atomic number than the metal layers, probably tungsten.
  • Stacked vias are allowed and used frequently.
  • Well isolation is STI.
EDS spectra confirmed all these initial impressions to be correct.

M1 to M3 have pretty much identical stackups except for slight thickness differences: 100nm of Ti-based adhesion/barrier layer, 400-550 nm of aluminum conductor, then another barrier layer of similar composition. M4 is slightly thicker (850 nm aluminum) and the same barrier thickness.

The first overglass layer is the same material (silicon dioxide) as ILD; thickness ranges from slightly below the top of M4 to 630 nm above the top. The second overglass layer has a slightly higher backscatter yield (EDIT: confirmed by EDS to be silicon nitride) and is about 945 nm thick.

M1-3 pitch is just over 600 nm, while the smallest observed M4 pitch is 1 μm.

EDS spectrum of wire on M4

A closer view of M3 shows the barrier metals in more detail. The barrier is a bit over 100 nm thick normally but thins to about 45 nm underneath upward-facing vias, suggesting that the ILD etch for drilling via holes also damages the barrier material. A small amount (around 30 nm) of sagging is present over the top of downward-facing vias.

Via sidewalls are coated with barrier metal as well, however it is significantly thinner (20 nm vs 100) than the metal layer barrier. The vias themselves are polycrystalline tungsten. Grain structure is clearly visible in the secondary electron image below.

(Note: The structure at left of the image is the edge of the FIB trench and stray material deposited by the ion beam and is not part of the actual device. The lower via is at a slight angle to the section so it was not entirely sliced open.)
M3 with upward/downward vias in cross section.
EDS spectrum of M1-M2 via area
The metal aspect ratio ranges from 3:1 on M1 to 1.5:1 on M4.

Now for the most interesting area - the transistors themselves!

The cross section was taken down the center of the CPLD's PLA OR array between two rows of 6T SRAM cells. Two PMOS transistors from each of two SRAM cells are visible in the closeup below.

Contacted gate pitch is 920 nm, for total cell width (including the 1180 nm of STI trench) of 2.9 μm. Plan view imaging shows total cell dimensions to be 2.9 x 3.3 μm or 9.5 μm2. This is a bit large for the 180 nm node and probably reflects the need to leave space on M1 and M2 for routing SRAM cell output to the programmable logic array.

SRAM cell structure and PLA AND array after metal/poly removal and Dash etch. (P-type implants are raised due to oxide growth from stain.)

Some variability in etch depth and sidewall slope is clearly visible on M1.

The polysilicon layer was hard to see in this view but is probably around 50 nm thick, topped by about 135 nm of cobalt silicide. (Gate oxide thickness isn't visible under SEM at the 180 nm node and I haven't yet had time to prepare a TEM sample.)

Source/drain contacts are made with a 70 nm thick cobalt silicide layer. All vias in the entire process appear to be about the same size (300 nm diameter) however the silicide contact pads are larger (465 nm).

Gate length is almost exactly 180 nm - measurement of the SEM image shows 175 nm +/- 12 nm.

Active area contacts and PMOS transistors
EDS spectrum of active-M1 contact
Closeup of PLA AND array after Dash etch showing PMOS and NMOS channels

Overall, the process seems fairly typical except for its use of aluminum for interconnect. It was a fun analysis and if I have time I may try to do a TEM cross section of both PMOS and NMOS transistors in the future. My main interest in the chip is netlist extraction, though, so this isn't a high priority.

I may also do a second post on the Flash portion of the chip.

EDIT: Decided to post a plan view SEM image of the flash array active area. This is after Dash etch; P-type areas have oxide grown over them. Poly has been stripped. The left-hand flash area is ten bits wide and stores configuration for function block 2's macrocells plus a "valid" bit. The right-hand area stores configuration for FB2's PLA (including both the AND and OR arrays, but not global routing).

Plan view SEM of flash
Finally, I would like to give special thanks to David Frey at the RPI cleanroom for assistance with the FIB cross section.

by Andrew Zonenberg (noreply@blogger.com) at February 12, 2014 02:45 PM

February 11, 2014

Free Electrons

Buildroot meeting and FOSDEM report, Google Summer of Code topics

As we discussed in a recent blog post, two of our engineers participated in the FOSDEM conference early February in Brussels, Belgium. For those interested, many videos are available, such as several videos from the Lameere room, where the embedded related talks were given.

Thomas Petazzoni also participated in the two-day Buildroot Developers Meeting after the FOSDEM conference. This meeting gathered 10 contributors to the Buildroot project in person and two additional remote participants. The event was sponsored by Google and Mind, thanks a lot to them! During those two days, the participants were able to discuss a very large number of topics that are often difficult to discuss over mailing lists or IRC, and significant work to clean up the oldest pending patches was done. In addition to this, these meetings are also very important to allow the contributors to get to know each other, as it makes future online discussions and collaborations much easier and more fruitful. For more details, see the complete report of the event.

Buildroot Developers Meeting in Brussels

Buildroot Developers Meeting in Brussels

Also, if you’re interested in Buildroot, the project has applied to participate in the next edition of the Google Summer of Code. Two project ideas are already listed on the project wiki; feel free to contact Thomas Petazzoni if you are a student interested in these topics, or if you have other proposals to make for Buildroot.

by Thomas Petazzoni at February 11, 2014 08:30 AM

February 09, 2014

Bunnie Studios

Name that Ware February 2014

The Ware for February 2014 is shown below.

This month’s ware is a handsome bit of retro-computing contributed by Edouard Lafargue (ed _at_ aerodynes.org). The ware was a gift to him from his father.

by bunnie at February 09, 2014 06:59 PM

Winner, Name that Ware January 2014

The Ware for January 2014 was a Chipcom ORnet fiber optic transceiver. Per guessed the ware correctly, and is thus the winner. Congrats, email me for your prize! And thanks again to Mike Fitzmorris for the contribution.

by bunnie at February 09, 2014 06:59 PM

January 31, 2014

Free Electrons

Free Electrons contributions to Linux 3.13

Version 3.13 of the Linux kernel was released by Linus Torvalds on January, 19th 2014. The kernelnewbies.org site has an excellent page that covers the most important improvements and feature additions that this new kernel release brings.

As usual, Free Electrons contributed to this kernel: with 121 patches merged out of a total of 12127 patches in 3.13, Free Electrons is ranked 17th in the list of companies contributing to the Linux kernel. We also appeared in Jonathan Corbet's kernel contribution statistics at LWN.net, as a company having contributed 1% of the kernel changes, right between Renesas Electronics and Huawei Technologies.

Amongst the contributions we made for 3.13:

  • Standby support added to the Marvell Kirkwood processors, done by Ezequiel Garcia.
  • Various fixes and improvements to the PXA3xx NAND driver, as well as to the Marvell Armada 370/XP clocks, in preparation to the introduction of NAND support for Armada 370/XP, which will arrive in 3.14. Work done by Ezequiel Garcia.
  • Added support for the Performance Monitoring Unit in the AM33xx Device Tree files, which makes it possible to use perf and oprofile on platforms such as the BeagleBone. Work done by Alexandre Belloni.
  • Support added for the I2C controllers on certain Allwinner SoCs, as well as several other cleanups and minor improvements for these SoCs. Work done by Maxime Ripard.
  • Continued the work to get rid of IRQF_DISABLED, as well as other janitorial tasks such as removing unused Kconfig symbols. Work done by Michael Opdenacker.
  • Added support for MSI (Message Signaled Interrupts) for the Armada 370 and XP SoCs. Work done by Thomas Petazzoni.
  • Added support for the Marvell Matrix board (an Armada XP based platform) and the OpenBlocks A7 (a Kirkwood based platform manufactured by PlatHome). Work done by Thomas Petazzoni.

In detail, the patches contributed by Free Electrons are:

by Thomas Petazzoni at January 31, 2014 12:55 PM

Free Electrons at FOSDEM and at the Buildroot Developers Meeting

This week-end is the first week-end of February, which on the schedule of all open-source developers is always booked for a major event of our community: the FOSDEM conference in Brussels. With several hundred talks over two days, this completely free event is one of the biggest events of the open-source world, if not the biggest.

For embedded Linux developers, FOSDEM has quite a few interesting tracks and talks this year: an embedded track, a graphics track (with many embedded-related talks, such as talks on Video4Linux, the status of open-source drivers for 2D and 3D graphics on ARM platforms, etc.), and several talks in other tracks relevant to embedded developers. For example, there is one talk about the Allwinner SoCs and the community behind them in one of the main tracks. Our engineer Maxime Ripard is the Linux kernel maintainer for this family of SoCs.

Two Free Electrons engineers will attend FOSDEM: Maxime Ripard and Thomas Petazzoni. Do not hesitate to get in touch with them if you want to discuss embedded Linux or kernel topics!

Also, right after FOSDEM, the Buildroot community is organizing its Developers Meeting, on Monday 3rd and Tuesday 4th February. This event is sponsored by Google (providing the meeting location) and Mind (providing the dinner), and will take place in the offices of Google in Brussels. Ten Buildroot developers will participate in the meeting in Brussels, as well as a number of others remotely. On the Free Electrons side, Thomas Petazzoni will be participating in the meeting. If you are interested in participating, either physically or remotely, do not hesitate to contact Thomas to register. For more details, see the wiki page of the event.

by Thomas Petazzoni at January 31, 2014 12:45 PM

January 27, 2014

Moxie Processor

NetHack in your browser

This is a moxie-rtems port of NetHack running on a modified version of the GDB moxie simulator compiled to JavaScript with Emscripten.


                        Terminal uses canvas
                    




Krister Lagerström is responsible for this marvellous hack.

Also, I suppose this blog entry represents a distribution of some GPL’d source code from GDB, so I need to tell you about the:

And then there’s RTEMS:

And finally NetHack:

by green at January 27, 2014 02:12 AM

January 23, 2014

Andrew Zonenberg, Silicon Exposed

Hardware reverse engineering class

So, it's been a while since I've posted anything and I figured I'd throw up a quick update.

I've been super busy over the winter break working on my thesis, as well as something new: My advisor and I are running a brand-new, experimental course, CSCI 4974/6974 Hardware Reverse Engineering, at Rensselaer Polytechnic Institute (RPI) this spring!

I gave the first lecture for the class last Tuesday and it was very well received. We have a full class - 12 undergraduates and 4 graduates as of this writing. As the TA for the class I'm responsible for (among other things) running labs and preparing samples. I've been running all over campus getting trained on various pieces of equipment, booking lab time for the class, and generally making sure this is going to be an awesome semester for all involved.

Lecture notes are available online on the course website for anyone who wishes to follow along.

Finally, a few peeks at my microprobing setup. I think I need new micropositioners, the backlash on these is pretty terrible. Whenever I adjust the left-right axis, the probe needle rotates by a degree or two.


by Andrew Zonenberg (noreply@blogger.com) at January 23, 2014 06:16 PM

Altus Metrum

keithp's rocket blog: AltOS 1.3.1

AltOS 1.3.1 — Bug fixes and improved APRS support

Bdale and I are pleased to announce the release of AltOS version 1.3.1.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including bug fixes for TeleMega, TeleMetrum v2.0 and AltosUI.

AltOS Firmware — Antenna down fixed and APRS improved

Firmware version 1.3 has a bug in the support for operating the flight computer with the antenna facing downwards; the accelerometer calibration data would be incorrect. Furthermore, the accelerometer self-test routine would be confused if the flight computer were moved in the first second after power on. The firmware now simply re-tries the self-test several times.

I went out and bought a “real” APRS radio, the Yaesu FT1D to replace my venerable VX 7R. With this in hand, I changed our APRS support to use the compressed position format, which takes fewer bytes to send, offers increased resolution and includes altitude data. I took the altitude data out of the comment field and replaced that with battery and igniter voltages. This makes APRS reasonably useful in pad mode to monitor the state of the flight computer before boost.
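
(For readers unfamiliar with the format: the compressed position packs latitude and longitude into four Base-91 characters each. The sketch below is my own illustration, not AltOS code, and the scale factors are the ones I recall from the APRS 1.01 spec, so double-check them before relying on it.)

    #include <math.h>

    /* Encode a value as four Base-91 digits, each offset by 33 ('!'). */
    static void base91_encode4(long value, char out[4])
    {
        for (int i = 3; i >= 0; i--) {
            out[i] = (char)('!' + (value % 91));
            value /= 91;
        }
    }

    /* APRS compressed position: lat -> 380926*(90-lat), lon -> 190463*(180+lon). */
    static void aprs_compress(double lat, double lon, char lat4[4], char lon4[4])
    {
        base91_encode4(lround(380926.0 * (90.0 - lat)), lat4);
        base91_encode4(lround(190463.0 * (180.0 + lon)), lon4);
    }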

Anyone with a TeleMega should update to the new firmware eventually, although there aren’t any critical bug fixes here, unless you’re trying to operate the device with the antenna pointing downwards.

AltosUI — TeleMega support and offline map loading improved.

I added all of the new TeleMega sensor data as possible elements in the graph. This lets you see roll rates and horizontal acceleration values for the whole flight. The ‘Fire Igniter’ dialog now lists all of the TeleMega extra pyro channels so you can play with those on the ground as well.

Our offline satellite images are downloaded from Google, but they restrict us to reading 50 images per minute. When we tried to download a 9x9 grid of images to save for later use on the flight line, Google would stop feeding us maps after the first 50. You’d have to poke the button a second time to try and fill in the missing images. We fixed this by just limiting how fast we load maps, and now we can reliably load an 11x11 grid of images.
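
(AltosUI itself is Java, but the fix is just generic inter-request throttling; a minimal sketch of the idea in C, using the 50-images-per-minute figure quoted above, would be:)

    #include <time.h>

    /* Wait at least 60 s / 50 = 1.2 s between map tile requests so the
     * download rate stays under the server's 50-images-per-minute limit. */
    static void throttle_before_next_tile(void)
    {
        const struct timespec gap = { .tv_sec = 1, .tv_nsec = 200000000L };
        nanosleep(&gap, NULL);
    }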

Of course, there are also a few minor bug fixes, so it’s probably worth updating even if the above issues don’t affect you.

January 23, 2014 04:39 AM

January 22, 2014

Richard Hughes, ColorHug

AppData status for January

So, it’s been a couple of months since my last post about AppData progress, so about time for one more. These are the stats for Fedora 21 in January (with the stats for Fedora 20 in November in brackets):

Applications in Fedora with long descriptions: 11% (up from 9%)
Applications in Fedora with screenshots: 9% (up from 7%)
Applications in GNOME with AppData: 53% (up from 50%)
Applications in KDE with AppData: 1% (unchanged)
Applications in XFCE with AppData: 0% (unchanged)

If you want to see what your application looks like, but don’t want to run gnome-software from Fedora rawhide or jhbuild, you can check the automatically-generated status page.

Some applications like 0ad and eog look great in the software center, but some like frogr and gbrainy just look sad. As always, full details about AppData here.

by hughsie at January 22, 2014 11:30 AM

January 17, 2014

ZeptoBARS

ST 34C02 - 2048-bit EEPROM : weekend die-shot

STMicroelectronics 34C02 is a 2048-bit EEPROM with hardware write-protect and an I2C interface, typically used as the SPD chip in DIMM memory modules.
Die size 1542x1422 µm, 1.2µm half-pitch.


January 17, 2014 04:10 PM

January 16, 2014

Video Circuits

Christia Schubert

Christia Schubert has some nice complex vector work here from 1984/85
Produced using a computer system driving a Soltec/IBM flatbed plotter




by Chris (noreply@blogger.com) at January 16, 2014 03:12 AM

January 14, 2014

Free Electrons

Free Electrons New Year – 2014

This article was published on our quarterly newsletter. A French version also exists.

The Free Electrons team wishes you a Happy New Year for 2014, with plenty of optimism and energy!

We are taking this opportunity to give some news about Free Electrons.

In 2013, Free Electrons significantly increased its contribution to open-source projects, especially at the Linux kernel level.

639 patches integrated in the Linux kernel, mainly to improve support for Marvell ARM processors and Allwinner ARM processors. For all kernel releases published in 2013, Free Electrons has been in the top 30 contributing companies. We now have significant experience in integrating support for ARM processors in the Linux kernel, and we expect to work more in this area in 2014.

595 patches integrated in the Buildroot embedded Linux build system, in a large number of areas, making Free Electrons the second most important contributor after Buildroot’s maintainer. This effort allows Free Electrons to keep an up-to-date expertise in cross-compilation and build systems.

26 patches integrated in the Barebox bootloader:

22 patches to the Yocto Freescale layer, mainly adding support for the Crystalfontz boards. In the process, a new image type was developed and significant improvements were made to the Barebox recipe.

Several of these contributions, and many other activities, were driven by development and consulting activities in 2013, with mainly:

  • Linux kernel code development, adding and maintaining support for customer ARM processors or boards in the mainline Linux kernel, especially Marvell and Freescale processors.
  • Linux kernel, driver development and build system integration for an Atmel SAMA5 based medical device.
  • Development of Linux kernel drivers for radio-frequency transceivers, on an Atmel SAMA5 based home automation platform.
  • Boot time optimization audits.
  • Buildroot consulting and audit.

We have also significantly improved and updated our training courses:

Our training materials remain freely available under a Creative Commons license, including their source code, available from a public Git repository.

Free Electrons continues to believe that participating in conferences is critical to keep its engineers up to date with the latest Linux developments and to create connections with the developers of the Linux community, which are essential to make our projects progress faster. For this purpose, we participated in a large number of conferences in 2013:

  • FOSDEM 2013, in Brussels, Belgium. Our CTO and engineer Thomas Petazzoni gave a talk about ARM kernel development
  • Buildroot Developers Meeting, Brussels, Belgium. Our engineer Thomas Petazzoni organized and participated in this two-day meeting, sponsored by Google, to work on Buildroot developments.
  • Embedded Linux Conference 2013 and Android Builders Summit 2013, in San Francisco, United States. Our engineer Gregory Clement gave a talk about the Linux kernel clock framework. Our engineer Thomas Petazzoni gave a talk about ARM kernel development. See also our videos.
  • Linaro Connect Europe 2013, Dublin, Ireland. Our engineer Thomas Petazzoni participated in numerous discussions related to support for ARM processors in the Linux kernel.
  • Linux Plumbers 2013, New Orleans, United States. Our engineer Maxime Ripard attended the conference, and participated in discussions around Android and Linux kernel development.
  • Kernel Recipes, Paris, France. Both Free Electrons CEO Michael Opdenacker and CTO Thomas Petazzoni participated in this Linux kernel conference, and Thomas gave two talks: one about ARM kernel development and one about Buildroot.
  • ARM kernel mini-summit 2013, Edinburgh, UK. Our engineers Gregory Clement, Thomas Petazzoni and Maxime Ripard participated in the invitation-only ARM kernel mini-summit. This summit is the key place to discuss and define the next directions for support for ARM processors in the Linux kernel.
  • Embedded Linux Conference Europe, Edinburgh, UK. Gregory Clement gave a talk about the Linux kernel clock framework and Thomas Petazzoni gave a talk about the Device Tree.
  • Buildroot Developers Meeting, Edinburgh, UK. Our engineer Thomas Petazzoni organized and participated in this two-day meeting, sponsored by Imagination Technologies, to work on Buildroot development.

A very important development for Free Electrons in 2013 was the addition of a new engineer to our team: Alexandre Belloni joined us in March 2013. Alexandre has very significant embedded Linux and kernel experience; see his profile.

Now, let’s talk about our plans for 2014:

  • Hire several additional engineers. One of them has already been hired and will join us in April, bringing significant Linux kernel development experience, including mainline contributions.
  • Our involvement in support for ARM processors in the Linux kernel will grow significantly.
  • Two new training courses will be released: one “Boot time reduction” training course, and an “OpenEmbedded and Yocto” training course.
  • For the first time, we will organize public training sessions (open to individual registration) outside of France.
    • Our next Android system development session in English will happen on April 14-17 in Southampton, UK
    • We are also working on embedded Linux and Kernel and driver development sessions in the USA, to be announced in the next weeks.
    • We also plan to organize embedded Linux and Kernel and driver development sessions in Germany, with German speaking trainers.
    • By the way, our Android system development courses in French will continue to run in Toulouse, but there will also be a session on April 1-4 in Lyon.

    See also the full list of public sessions.

As in 2013, we will participate in several key conferences. We have already planned our participation in Linux Conf Australia (January 2014), FOSDEM (February 2014), the Embedded Linux Conference (April 2014) and the Embedded Linux Conference Europe (October 2014).

You can follow Free Electrons news by reading our blog and by following our quick news on Twitter. We now have a Google+ page too.

Again, Happy New Year!

The Free Electrons team.

by Michael Opdenacker at January 14, 2014 02:57 PM

January 13, 2014

Richard Hughes, ColorHug

For artists, photographers and animators it’s often essential to be working with an accurately color-calibrated screen. It’s also important to be able to print accurate colors, being sure the hard copy matches what is shown on the display.

The OpenHardware ColorHug Colorimeter device provided an inexpensive way to calibrate some types of screen, and is now being used by over 2000 people. Due to limitations of the low-cost hardware, it does not work well on high-gamut or LED screen technologies, which are now becoming more common.

ColorHug Spectro is a new device designed as an upgrade to the original ColorHug. This new device features a mini-spectrograph with UV-switched illuminants. This means it can also take spot measurements of paper or ink, which allows us to profile printers and ensure we have a complete story for color management on Linux.

I’m asking anyone perhaps interested in buying a device in about 9 months’ time to visit this page, which details all the specifications so far. If you want to pre-order, just send us an email and we’ll add you to the list. If there aren’t at least 100 people interested, the project just isn’t economically viable for us, as there are significant NRE costs for all the optics.

Please spread the word to anyone that might be interested. I’ve submitted a talk to LGM to talk about this too, which hopefully will be accepted.

by hughsie at January 13, 2014 04:56 PM

January 12, 2014

ZeptoBARS

Microchip 24LCS52 : weekend die-shot

Microchip 24LCS52 is a 2048-bit EEPROM with an I2C interface.
Die size 1880x1880 µm, 2µm half-pitch.



Closer look at charge-pump:

January 12, 2014 02:21 PM

January 11, 2014

Moxie Processor

The Moxie Game Console?

Ok, not quite, but Krister Lagerström recently did something cool..

nethack

That’s NetHack ported to RTEMS running on the moxie based Marin SoC.

It runs on QEMU, via "qemu-system-moxie --nographic --machine marin --kernel nethack.elf", or on FPGA hardware. I tested with a Nexys 3 Spartan-6 board by simply converting it to an S-record file and sending it via the serial port to the hardware’s boot loader.

Krister implemented the Marin BSP for RTEMS, then ported ncurses and nethack to moxie-rtems. Like many programs with a UNIX heritage, NetHack reads data files from a local file system. RTEMS solves that by providing a simple in-memory filesystem you can initialize with a tar file and link to your ELF executable.
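
(The usual RTEMS pattern looks roughly like the sketch below; this is not Krister's actual code, and the symbol names for the linked-in archive depend on how you embed it.)

    #include <rtems/untar.h>

    /* Hypothetical symbols created when the tar image is linked into the ELF. */
    extern const unsigned char nethack_tar[];
    extern const unsigned int  nethack_tar_size;

    /* Unpack the archive into the default in-memory filesystem (IMFS),
     * so NetHack's fopen() calls on its data files work as they would on UNIX. */
    static void populate_rootfs(void)
    {
        Untar_FromMemory((char *)nethack_tar, (size_t)nethack_tar_size);
    }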

For my part, I had to fix a couple of QEMU bugs and point the moxie-cores tools build scripts to staging git repos until the bugs are fixed upstream. As usual, everything should be here: http://github.com/atgreen/moxie-cores.

Thank you, Krister, and I’m looking forward to the other cool things you have planned!

by green at January 11, 2014 02:06 PM

January 09, 2014

Bunnie Studios

Make: Article on Novena

Recently, the Make: blog ran an article on our laptop project, Novena. You can now follow @novenakosagi for updates on the project. I’d also like to reiterate here that the photos shown in the article are just an early prototype, and the final forms of the machine are going to be different — quite different — from what’s shown.

Below is a copy of the article text for your convenient reading. And, as a reminder, specs and source files can be downloaded at our wiki.

Building an Open Source Laptop

About a year and a half ago, I embarked on an admittedly quixotic project to build my own laptop. By I, I mean we, namely Sean “xobs” Cross and me, bunnie. Building your own laptop makes about as much sense as retrofitting a Honda Civic with a 1000hp motor, but the lack of practicality never stopped the latter activity, nor ours.

My primary goal in building a laptop was to build something I would use every day. I had previously spent several years at chumby building hardware platforms that I’m ashamed to admit I rarely used. My parents and siblings loved those little boxes, but they weren’t powerful enough for a geek like me. I try to allocate my discretionary funds towards things based on how often I use them. Hence, I have a nice bed, as I spend a third of my life in it. The other two thirds of my life is spent tapping at a laptop (I refuse to downgrade to a phone or tablet as my primary platform), and so when picking a thing to build that I can use every day, a laptop is a good candidate.

I’m always behind a keyboard!

The project was also motivated by my desire to learn all things hardware. Before this project, I had never designed with Gigabit Ethernet (RGMII), SATA, PCI-express, DDR3, gas gauges, eDP, or even a power converter capable of handling 35 watts – my typical power envelope is under 10 watts, so I was always able to get away with converters that had integrated switches. Building my own laptop would be a great way for me to stretch my legs a bit without the cost and schedule constraints normally associated with commercial projects.

The final bit of motivation is my passion for Open hardware. I’m a big fan of opening up the blueprints for the hardware you run – if you can’t Hack it, you don’t Own it.

Back when I started the project, it was me and a few hard core Open ecosystem enthusiasts pushing this point, but Edward Snowden changed the world with revelations that the NSA has in fact taken advantage of the black-box nature of the closed hardware ecosystem to implement spying measures (“good news, we weren’t crazy paranoids after all”).

Our Novena Project is of course still vulnerable to techniques such as silicon poisoning, but at least it pushes openness and disclosure down a layer, which is tangible progress in the right direction.

While these heady principles are great for motivating the journey, actual execution needs a set of focused requirements. And so, the above principles boiled down to the following requirements for the design:

  • All the components should have a reasonably complete set of NDA-free documentation. This single requirement alone culled many choices. For example, Freescale is the only SoC vendor in this performance class where you can simply go to their website, click a link, and download a mostly complete 6,000-page programming manual. It’s a ballsy move on their part and I commend them for the effort.
  • Low cost is not an objective. I’m not looking to build a crippled platform based on some entry-level single-core SoC just so I can compete price-wise with the likes of Broadcom’s non-profit Raspberry Pi platform.
  • On the other hand, I can’t spec in unicorn hair, although I come close to that by making the outer case from genuine leather (I love that my laptop smells of leather when it runs). All the chips are ideally available off the shelf from distributors like Digi-Key and have at least a five year production lifetime.
  • Batteries are based off of cheap and commonly available packs used in RC hobby circles, enabling users to make the choice between battery pack size, runtime, and mass. This makes answering the question of “what’s the battery life” a bit hard to answer – it’s really up to you – although one planned scenario is the trans-Siberian railroad trek, which is a week-long trip with no power outlets.
  • The display should also be user-configurable. The US supply chain is weak when it comes to raw high-end LCD panels, and also to address the aforementioned trans-Siberian scenario, we’d need the ability to drive a low-power display like a Pixel Qi, but not make it a permanent choice. So, I designed the main board to work with a cheap LCD adapter board for maximum flexibility.
  • No binary blobs should be required to boot and operate the system for the scenarios I care about. This one is a bit tricky, as it heavily limits the wifi card selection, I don’t use the GPU, and I rely on software-only decoders for video. But overall, the bet paid off; the laptop is still very usable in a binary-blob free state. We prepared and gave a talk recently at 30C3 using only the laptops.
  • The physical design should be accessible – no need to remove a dozen screws just to pull off the keyboard. This design requires removing just two screws.
  • The design doesn’t have to be particularly thin or light; I’d be happy if it was on par with the 3cm-thick Thinkpads or Inspirons I would use back in the mid 2000′s.
  • The machine must be useful as a hardware hacking platform. This drives the rather unique inclusion of an FPGA into the mainboard.
  • The machine must be useful as a security hacking platform. This drives the other unusual inclusion of two Ethernet interfaces, a USB OTG port, and the addition of 256 MiB DDR3 RAM and a high-speed expansion connector off of the FPGA.
  • The machine must be able to build its own firmware from source. This drives certain minimum performance specs and mandates the inclusion of a SATA interface for running off of an SSD.

After over a year and a half of hard work, I’m happy to say our machines are in a usable form. The motherboards are very reliable, the display is a 13” 2560×1700 (239ppi) LED-backlit panel, and the cases have an endoskeleton made of 5052 and 7075 aluminum alloys, an exterior wrapping of genuine leather, an interior laminate of paper (I also love books and papercraft), and cosmetic panels 3D printed on a Form 1. The design is no Thinkpad Carbon X1, but they’ve held together through a couple of rough international trips, and we use our machines almost every day.

Laptop parked in front of the Form1 3D printer used to make its body panels.

I was surprised to find the laptop was well-received by hackers, given its homebrew appearance, relatively meager specs and high price. The positive response has encouraged us to plan a crowd funding campaign around a substantially simplified (think “all in one PC” with a battery) case design. We think it may be reasonable to kick off the campaign shortly after Chinese New Year, maybe late February or March. Follow @novenakosagi for updates on our progress!

The first two prototypes are wrapped in red sheepskin leather, and green pig suede leather.

Detail view of the business half of the laptop.

by bunnie at January 09, 2014 03:23 AM

January 08, 2014

Video Circuits

Early Lighting Effects at The BBC

Via youtuber Marc Campbell

This is a video of some beautiful lighting effects work at the BBC.

more on the history of lighting effects coming very soon

by Chris (noreply@blogger.com) at January 08, 2014 09:29 AM

Mars an Optic Aspic

From youtuber electromedia
 "Bill Etra's real-time live performance from The Kitchen in 1968 was recorded on Color 16mm film by Woody Vasulka. Woody had that film transferred to DV in 2003, and sent one to Bill. Eventually, Bill gave me a copy of the DV with the hope that I could restore MARS to something similar to its original incarnation. Mars an Optic Aspic was originally performed on 9 B&W monitors, but the 16mm film added some unexpected and welcome color effects that lend themselves to the composition. The choice was made to leave them in. So, this is my restoration (and color correction), with no change to the original sound, and for the first time since the original performance we can see the work in a 9-way split."

by Chris (noreply@blogger.com) at January 08, 2014 09:19 AM

Mandalamat

Here is the very cool Mandalamat, an analogue computer built by Christian Günther specifically to create interesting patterns using either an XY plotter or an oscilloscope. I have had an XY plotter for a while and made a few drawings using a modular synthesizer and function generators, but nothing this beautiful!

by Chris (noreply@blogger.com) at January 08, 2014 03:43 AM

January 07, 2014

Bunnie Studios

Name that Ware January 2014

The ware for January is shown below.

Thanks to my buddy Mike Fitzmorris for contributing yet another wonderful ware to this competition.

by bunnie at January 07, 2014 08:13 AM

Winner, Name that Ware December 2013

The Ware for December 2013 is the beloved 555 timer, specifically the TLC555 by TI, fabricated in their “LinCMOS” process. I was very excited when T. Holman allowed me to post this as a ware, partially because I love die shots, and partially because the 555 is such a noteworthy chip and up until now I had never seen the inside of one.

The huge transistor on the top left is the discharge N-FET, and the distinctive, round FETs are I believe what makes LinCMOS special. These are tailor-designed transistors for good matching across process conditions, and they form the front ends of the two differential comparators that are the backbone of the threshold/trigger circuit inside the 555.

Picking a winner this time was tough — many close guesses. I’m going to name “eric” the winner, as he not only properly identified the chip, but also gave extensive analysis in his answer as well. DavidG had a correct guess very early on, but no explanation. Jonathan had the right answer, but the divider underneath the big transistor wasn’t done with resistors, it’s done with FETs, and steveM2 also was very, very close but called it a TLC551Y. So congrats to eric, email me to claim your prize.

by bunnie at January 07, 2014 08:12 AM

December 31, 2013

Free Electrons

New training materials: boot time reduction workshop

We are happy to release new training materials that we have developed in 2013 with funding from Atmel Corporation.

The materials correspond to a 1-day embedded Linux boot time reduction workshop. In addition to boot time reduction theory, consolidating some of our experience from our embedded Linux boot time reduction projects, the workshop allows participants to practice with the most common techniques. This is done on SAMA5D3x Evaluation Kits from Atmel.

The system to optimize is a video demo from Atmel. We reduce the time to start a GStreamer based video player. During the practical labs, you will practice with techniques to:

  • Measure the various steps of the boot process
  • Analyze time spent starting system services, using bootchartd
  • Simplify your init scripts
  • Trace application startup with strace
  • Find kernel functions taking the most time during the boot process
  • Reduce kernel size and boot time
  • Replace U-Boot by the Barebox bootloader, and save a lot of time
    thanks to the activation of the data cache.

As usual, our training materials are available under the terms of the Creative Commons Attribution-ShareAlike 3.0 license. This essentially means that you are free to download, distribute and even modify them, provided you mention us as the original authors and that you share these documents under the same conditions.

Special thanks to Atmel for allowing us to share these new materials under this license!

Here are the documents at last:

The first public session of this workshop will be announced in the next weeks.
Don’t hesitate to contact us if you are interested in organizing a session on your site.

by Michael Opdenacker at December 31, 2013 08:33 PM

December 29, 2013

Michele's GNSS blog

Wrapping up 2013 GNSS facts

Well, it has indeed been quite a long time since I last wrote here.
My personal situation has changed in such a way that it is hard to find the time to share my views and to comment on your feedback about current advances in the GNSS R&D domain.
However, I feel like dropping here an "end of the year" summary which serves more as a memorandum to me than anything else.
I might be challenging this article
http://gpsworld.com/2013-a-positive-year-for-location-industry/
but 2013 has not been a very good year for GNSS.
At least someone generally agrees:
http://qz.com/161443/2013-was-a-lost-year-for-tech/#!

Mass market receivers for low cost RTK
Lately there was one more item on the acquisitions list:
The above shows how aggressive this market is and leaves just a few players on the field.
Some of them, such as Intel, Qualcomm and Broadcom, don't really sell to private customers and just target device manufacturers. Others offer off-the-shelf parts as well, and those are the most interesting for people tinkering with GNSS, of course.
In no particular order:

Mediatek
Early in 2013 Mediatek announced the MTK3333. This powerful true dual constellation receiver can be found in many modules (for example by Locosys and GlobalTop). Although impressive, it does not seem to offer pseudoranges or carrier phase at the moment.

uBlox
IMHO, uBlox has a confusing roadmap, with 6th generation modules overlapping with the 7th generation and now apparently the 8th generation. I design with uBlox modules and it is a little awkward to tell my customers that modules already advertised are due for release in 6 months. The Company has aggressively marketed a "PPP" technology that has nothing to do with Precise Point Positioning, but rather with carrier phase smoothing. uBlox still manages to sell navigation modules which are not true dual constellation but either one or the other constellation. Their "P" and "T" modules have well-established pseudorange, carrier phase and more measurements for GPS.
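
(To make the distinction concrete: carrier phase smoothing just uses the epoch-to-epoch delta of the carrier phase to filter pseudorange noise, whereas real PPP estimates carrier ambiguities using precise orbit and clock products. A minimal single-channel Hatch filter sketch, my own illustration and nothing uBlox-specific:)

    typedef struct {
        double rho_s;  /* smoothed pseudorange, metres           */
        double phi;    /* carrier phase at the previous epoch, m */
        int    n;      /* number of epochs accumulated so far    */
    } hatch_t;

    /* Feed one epoch of raw pseudorange 'rho' and carrier phase 'phi'
     * (both in metres); 'window' caps the averaging length. Reset n to 0
     * whenever a cycle slip is detected. Returns the smoothed pseudorange. */
    double hatch_update(hatch_t *h, double rho, double phi, int window)
    {
        if (h->n == 0) {
            h->rho_s = rho;                  /* first epoch: no smoothing yet */
            h->n = 1;
        } else {
            double n = (h->n < window) ? h->n + 1 : window;
            double predicted = h->rho_s + (phi - h->phi);  /* propagate by delta-phase */
            h->rho_s = rho / n + predicted * (n - 1.0) / n;
            if (h->n < window)
                h->n++;
        }
        h->phi = phi;
        return h->rho_s;
    }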

ST-Microelectronics
STM released in 2013 the TeseoII true dual constellation IC. The STA8088FG is 7x7 mm and is very capable. Pseudoranges can be obtained with some firmware, but never carrier phase. Carrier phase is rumored to be available in Teseo3, which is to be released next year. STM is more open to Galileo than any other Company, but it is weird to find their GNSS products under the "Automotive Infotainment and Telematics" category... not that the STM website is an easy one to navigate anyway.

NVS
NVS continued to innovate in 2013 by releasing new FW for their NV08C-CSM and -MCM modules. However, after the press release about compatibility with the popular precision navigation software RTKLIB, they went quiet. By the way, I wrote the support driver for their receiver in RTKLIB but never received a mention... you are welcome. Next year NVS will release a new hardware revision of their modules, but nothing has been said about what changes to expect compared to the current one. I have high expectations :)

Geostar Navigation
This Company was pretty new to me and came as a surprise in 2013. It offers a true dual GPS+Glonass receiver module. In my opinion it still has to improve in terms of reliability, but the preconditions are all good (the website is updated often, so at least the Company seems to be up and running). Rumors have it that Geostar-Navigation will release pseudorange and carrier phase capable firmware (for GPS at least) very soon.

Furuno
Furuno came back in 2013 with a true dual constellation chipset called eRideOPUS 7 and module called GN-87F. The Company has expressed interest in releasing Galileo compliant firmware soon and the chip seems to be able to output pseudoranges, but not carrier phase.

CSR
CSR has finally delivered a new standalone receiver IC, the SiRFstarV 5e. I bought the Telit Jupiter SE868 V2 (quite a mouthful) evaluation kit from Rutronik. I have not yet had a chance to test it out in a real environment, but it does simultaneously track GPS, SBAS and Glonass. The chip seems to be very swift and surely has best-in-class power consumption, but SiRF already departed from the raw-measurements path with their 3rd generation, so I would not expect them to be back on that track now.

Skytraq
Last but not least, Skytraq was among the first to release a true dual constellation chip and module with the intention of supporting raw measurements. I bought some S4554GNS-LP back in 2011 already. Since then the Company has made great progress in the integration and quality of its modules. The latest generation, the Venus8, has GPS 50Hz measurements, or GPS+Glonass at 20Hz. As all new modules comply with the uBlox NEO format, I have already had a chance to integrate some S1216F8, S1216F8-GL and S1216F8-BD. Whilst not tested in the field but "only" with an Agilent GNSS simulator, these modules represent for me the greatest promise for 2014.
All GNSS enthusiasts should check out the NavSpark:
http://igg.me/p/603168/x/5902022
Although I am not a crowdfunding enthusiast (see later as well), the news is that it is possible to get at least an evaluation kit (with libraries) for this powerful baseband processor for USD 199. There is a lot of room to play for sure, and 50Hz GPS raw measurements for less than USD 20 a module will create a buzz for sure.

GNSS Software Defined Radios

Looking out for interesting devices to use for GNSS SDR, this year has been a promising one. But not all promises were kept, as I will explain below.

Memoto Camera

One year ago already, following the hype around the Cell-guide snapshot GPS technology, I decided for the first time to back a Kickstarter project: the Memoto camera. This "lifelogging device" has an Aclys chip inside which only turns on for a few milliseconds every so often and records a GPS signal snapshot, in order to achieve the lowest possible battery drain. After more than 1 year, not only have I not received the camera, but my contact has been lost in the transition from Memoto to Narrative. With my support request unanswered, it is hard to know whether I have lost my pledge or not... fingers crossed.

Jawbreaker -> HackRF One

Michael Ossmann, creator of the Ubertooth, started developing other interesting devices for low cost SDR. I missed the Jawbreaker giveaway back in June by very little, so I decided to support the Kickstarter campaign for HackRF One, which is essentially the same object but not free and with 8 months more on its shoulders. Whether this time has actually gone towards real innovation or just made Michael more popular (well deserved in his case, at least) and rich remains to be seen.
My plans of using HackRF One for GNSS record and playback are pushed back a little by the fact that it is a half-duplex design, although I see some potential in properly hopping between TX and RX. If and when I receive the thing.

Nuand BladeRF

BladeRF was probably the greatest disappointment so far. It mounts
  • the Lime Microsystems LMS6002D, a fully programmable RF transceiver capable of full-duplex communication between 300MHz and 3.8GHz
  • an Altera Cyclone 4 with options at 40KLE or 115KLE
  • a Cypress FX3 microcontroller
Sounds like the perfect board to build a GNSS receiver on FPGA, and a real-time, continuous record and playback device. I received the boards back in the Summer and was never really able to get them working reliably. I installed Ubuntu several times; for now 13.10 seems to have native support, at least for the libusb version they link to. Things might change tomorrow, but it has been six months of bleeding so far. Essentially the software is not stable. BladeRF might even work as a 450 USD spectrum analyzer once you install a leviathan like GNU Radio. But it won't work for me if it misses packets once in a while, if it randomly switches the I and Q channels, if it cannot tune to 1.5GHz in TX mode, and if it works only with Renesas and NEC USB 3.0 hosts.

SwiftNav Piksi

Two bright guys, Fergus Noble and Colin Beighley, founded Swiftnav and started developing Piksi, an "open source" GPS receiver for RTK implemented as a combination of a Spartan6 9KLE FPGA and an STM32F4 168 MHz Cortex-M4 MCU. Swiftnav "kickstarted" Piksi back in the Summer, when I already had a sample of it. Unfortunately, once the campaign was funded (and I was one of the backers of course) there has been little development on the software side. Sure, more boards were manufactured to address sales, but Swiftnav customers are not yet able to see what the RTK engine will look like, nor do they have visibility of the FPGA correlator code.
Actually, those are two valuable pieces of software, so I cannot hide my scepticism about the promised Open Source nature of the venture.
Before I publish here anything about Piksi, I need to be provided a simple way of
  1. recording data with the board connected to a perfect antenna. 
  2. converting the recorded stream into Rinex OBS
IMHO, those two are the fundamentals when a company plans to sell an RTK-capable receiver. It does not have to be small, low power, or have an embedded antenna if the carrier phase isn't rock solid to begin with.
Very recently Swiftnav published a video (filmed in August, so why only now?) where they show differential accuracy on the rooftop of their office building... but that seems a very poor reward for three months of GitHub silence (1) (2).
I sent Fergus and Colin two pieces of the most recent Rap10LogWi release


asking them to try their RTK engine (I used their same MCU) on a bullet-proof low cost GPS receiver as the uBlox NEO6T.
I have not received any feedback so far... I know they are busy investing their reward, but I think their customers (and I) need some delivery of credibility at this point.

Ettus USRP B2x0

Probably the savior of my 2013 SDR hopes, the B2x0 boards from Ettus tick many boxes. They are perhaps a little expensive, but they are supported by the UHD driver, which has a huge community behind it. How Ettus managed to pull off a deal with Analog Devices and be the first to integrate the super powerful AD9361, I don't know. The TI AFE7070 came close sometime this year in terms of chip integration, and the above-mentioned LMS6002D (with its awkward footprint) even closer, but that ADI chip seems unbeatable right now. The Spartan-6 75KLE also seems large enough to run several channels of a GNSS receiver already. I will have access to a couple of these boards very soon and I cannot wait.

...TBC








by noreply@blogger.com (Michele Bavaro) at December 29, 2013 10:43 PM

Bunnie Studios

On Hacking MicroSD Cards

Today at the Chaos Computer Congress (30C3), xobs and I disclosed a finding that some SD cards contain vulnerabilities that allow arbitrary code execution — on the memory card itself. On the dark side, code execution on the memory card enables a class of MITM (man-in-the-middle) attacks, where the card seems to be behaving one way, but in fact it does something else. On the light side, it also enables the possibility for hardware enthusiasts to gain access to a very cheap and ubiquitous source of microcontrollers.

In order to explain the hack, it’s necessary to understand the structure of an SD card. The information here applies to the whole family of “managed flash” devices, including microSD, SD, MMC as well as the eMMC and iNAND devices typically soldered onto the mainboards of smartphones and used to store the OS and other private user data. We also note that similar classes of vulnerabilities exist in related devices, such as USB flash drives and SSDs.

Flash memory is really cheap. So cheap, in fact, that it’s too good to be true. In reality, all flash memory is riddled with defects — without exception. The illusion of a contiguous, reliable storage media is crafted through sophisticated error correction and bad block management functions. This is the result of a constant arms race between the engineers and mother nature; with every fabrication process shrink, memory becomes cheaper but more unreliable. Likewise, with every generation, the engineers come up with more sophisticated and complicated algorithms to compensate for mother nature’s propensity for entropy and randomness at the atomic scale.

These algorithms are too complicated and too device-specific to be run at the application or OS level, and so it turns out that every flash memory disk ships with a reasonably powerful microcontroller to run a custom set of disk abstraction algorithms. Even the diminutive microSD card contains not one, but at least two chips — a controller, and at least one flash chip (high density cards will stack multiple flash die). You can see some die shots of the inside of microSD cards at a microSD teardown I did a couple years ago.

In our experience, the quality of the flash chip(s) integrated into memory cards varies widely. It can be anything from high-grade factory-new silicon to material with over 80% bad sectors. Those concerned about e-waste may (or may not) be pleased to know that it’s also common for vendors to use recycled flash chips salvaged from discarded parts. Larger vendors will tend to offer more consistent quality, but even the largest players staunchly reserve the right to mix and match flash chips with different controllers, yet sell the assembly as the same part number — a nightmare if you’re dealing with implementation-specific bugs.

The embedded microcontroller is typically a heavily modified 8051 or ARM CPU. In modern implementations, the microcontroller will approach 100 MHz performance levels, and also have several hardware accelerators on-die. Amazingly, the cost of adding these controllers to the device is probably on the order of $0.15-$0.30, particularly for companies that can fab both the flash memory and the controllers within the same business unit. It’s probably cheaper to add these microcontrollers than to thoroughly test and characterize each flash memory chip, which explains why managed flash devices can be cheaper per bit than raw flash chips, despite the inclusion of a microcontroller.

The downside of all this complexity is that there can be bugs in the hardware abstraction layer, especially since every flash implementation has unique algorithmic requirements, leading to an explosion in the number of hardware abstraction layers that a microcontroller has to potentially handle. The inevitable firmware bugs are now a reality of the flash memory business, and as a result it’s not feasible, particularly for third party controllers, to indelibly burn a static body of code into on-chip ROM.

The crux is that a firmware loading and update mechanism is virtually mandatory, especially for third-party controllers. End users are rarely exposed to this process, since it all happens in the factory, but this doesn’t make the mechanism any less real. In my explorations of the electronics markets in China, I’ve seen shop keepers burning firmware on cards that “expand” the capacity of the card — in other words, they load a firmware that reports the capacity of a card is much larger than the actual available storage. The fact that this is possible at the point of sale means that most likely, the update mechanism is not secured.

In our talk at 30C3, we report our findings exploring a particular microcontroller brand, namely, Appotech and its AX211 and AX215 offerings. We discover a simple "knock" sequence transmitted over manufacturer-reserved commands (namely, CMD63 followed by 'A','P','P','O') that drops the controller into a firmware loading mode. At this point, the card will accept the next 512 bytes and run it as code.
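
To make the "knock" concrete, here is a rough sketch. sd_send_cmd() and sd_send_bytes() are purely hypothetical stand-ins for whatever raw SD host access you have, and whether the four ASCII bytes travel in the command argument or as a data payload is a card implementation detail, so treat this as an illustration of the idea rather than a working exploit:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical host-side helpers; not a real API. */
extern int sd_send_cmd(uint8_t cmd, uint32_t arg);
extern int sd_send_bytes(const uint8_t *buf, size_t len);

/* Sketch of the AX211/AX215 firmware-load "knock" described above:
 * manufacturer-reserved CMD63 followed by 'A','P','P','O', after which
 * the controller accepts the next 512 bytes and runs them as code. */
static int ax2xx_load_firmware(const uint8_t payload[512])
{
    static const uint8_t knock[4] = { 'A', 'P', 'P', 'O' };

    if (sd_send_cmd(63, 0) < 0)
        return -1;
    if (sd_send_bytes(knock, sizeof(knock)) < 0)
        return -1;
    return sd_send_bytes(payload, 512);
}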

From this beachhead, we were able to reverse engineer (via a combination of code analysis and fuzzing) most of the 8051's function-specific registers, enabling us to develop novel applications for the controller, without any access to the manufacturer's proprietary documentation. Most of this work was done using our open source hardware platform, Novena, and a set of custom flex circuit adapter cards (which, tangentially, led toward the development of flexible circuit stickers aka chibitronics).

Significantly, the SD command processing is done via a set of interrupt-driven call backs processed by the microcontroller. These callbacks are an ideal location to implement an MITM attack.

It's as yet unclear how many other manufacturers leave their firmware updating sequences unsecured. Appotech is a relatively minor player in the SD controller world; there's a handful of companies that you've probably never heard of that produce SD controllers, including Alcor Micro, Skymedi, Phison, SMI, and of course Sandisk and Samsung. Each of them would have different mechanisms and methods for loading and updating their firmwares. However, it's been previously noted that at least one Samsung eMMC implementation using an ARM instruction set had a bug which required a firmware updater to be pushed to Android devices, indicating yet another potentially promising avenue for further discovery.

From the security perspective, our findings indicate that even though memory cards look inert, they run a body of code that can be modified to perform a class of MITM attacks that could be difficult to detect; there is no standard protocol or method to inspect and attest to the contents of the code running on the memory card’s microcontroller. Those in high-risk, high-sensitivity situations should assume that a “secure-erase” of a card is insufficient to guarantee the complete erasure of sensitive data. Therefore, it’s recommended to dispose of memory cards through total physical destruction (e.g., grind it up with a mortar and pestle).

From the DIY and hacker perspective, our findings indicate a potentially interesting source of cheap and powerful microcontrollers for use in simple projects. An Arduino, with its 8-bit 16 MHz microcontroller, will set you back around $20. A microSD card with several gigabytes of memory and a microcontroller with several times the performance could be purchased for a fraction of the price. While SD cards are admittedly I/O-limited, some clever hacking of the microcontroller in an SD card could make for a very economical and compact data logging solution for I2C or SPI-based sensors.

Slides from our talk at 30C3 can be downloaded here, or you can watch the talk on Youtube below.

Team Kosagi would like to extend a special thanks to .mudge for enabling this research through the Cyber Fast Track program.

by bunnie at December 29, 2013 02:43 PM

ZeptoBARS

Invensense MPU6050 6-axis MEMS IMU : weekend die-shot

MEMS is probably the most requested thing we are asked about. A year ago we unsuccessfully tried to take a photo of the MPU6050. Now it's time for revenge!

Invensense MPU6050 is an integrated gyroscope and accelerometer with 16-bit readings. It contains 2 dies, soldered/welded face-to-face in multiple places (that's what was causing us trouble last time: the temperature required for separation exceeded 600°C).



On the overview photo you can see how not-flat they are. On the bigger die the MEMS part is 28 µm above the surface, on the smaller die - 100 µm above. Also, there is logic right under the MEMS on the bigger die.


Size of big die is 2782x2718 µm, small die - 2778x2195 µm.


Small die, focus on the top level. The width of the smallest teeth is 1 µm.
These teeth allow sensing their movement via the change of capacitance between the electrodes.


Small die, focus on bottom level:


Big die:


Below the MEMS one can find conventional digital logic, ~250 nm half-pitch.

SRAM, cell area is 10.13 µm2:


Standard-cell-based logic:

December 29, 2013 09:06 AM

December 28, 2013

Video Circuits

Pipilotti Rist

How have I never posted any Pipilotti Rist before? Maybe because she is so well known and the main focus of her work diverges a little from the bulk of the work here. Still cracking stuff!
http://www.youtube.com/user/AtelierRist/videos
















Can't embed but this one here is good
http://youtu.be/8DLuj-xMphQ

by Chris (noreply@blogger.com) at December 28, 2013 09:15 AM

December 23, 2013

ZeptoBARS

1645RT2U - Milandr radhard antifuse ROM : weekend die-shot

1645RT2U - radhard 32k*8 antifuse ROM, designed by Milandr. Die size - 8232x8973 µm.
Minimal observed half-pitch - 680nm.



After metalization etch:


Let's take a look at individual memory cells. Area of 1 cell is 91.8 µm2.

Antifuse ROM stores data by dielectric breakdown of thin oxide (17 V is required in this case for reliable programming). In this photo you can see green "squares" - these are the access transistors. Below/above them - in red rectangles - is the storage element itself. In the center of this rectangle there is an oval area where the dielectric is much thinner, and the dielectric breakdown happens somewhere there.



SEM photo made using Hitachi TM3030:

December 23, 2013 04:27 AM

December 20, 2013

Peter Zotov, whitequark

Foundry has been cancelled

Two years ago on this day I started working on Foundry, and I developed some nice things, including prototypes of both the language and the compiler. Today I’m cancelling the project.

The reason is simple and technical. The idea behind Foundry was to take the convenience Ruby offers, and transplant it to a statically typed language. My chosen implementation path involved global type inference in every interesting aspect of it. While powerful, this technique makes writing closely-coupled, modular code hard, makes separate compilation impossible, and makes error messages even more cryptic than those of C++.

Simply put, this is not a language I myself would use. Also, I could not find a way to get rid of global type inference which didn’t involve turning the language into a not invented here version of C#, Rust or what else.

Lessons? Don’t design a language unless you have a very good reason to. By all means, do design a language if your idea is fancy enough. And don’t use global type inference, it sucks.

Now go and check out Rust. It gets better every day.

December 20, 2013 09:44 PM

Free Electrons

Free Electrons at Linux Conf Australia, January 2014

Linux Conf Australia is by far the most well-known Linux-related conference of the southern hemisphere, with a good number of Linux kernel related talks and discussions, as well as many other topics around the Linux ecosystem. The 2014 edition of the event will take place in Perth, Australia, and the schedule of talks and mini-confs looks very promising!

For the first time, Free Electrons will be participating in this conference: our CTO and embedded Linux engineer Thomas Petazzoni will be giving a talk titled Buildroot: building embedded Linux systems made easy!, during which he will be presenting what Buildroot is, what it is useful for, and how it works.

Beyond this talk, Thomas will be attending the full week of conferences, so do not hesitate to get in touch with him, especially if you’re interested in embedded Linux topics, Buildroot, ARM kernel development, and in Free Electrons!

by Thomas Petazzoni at December 20, 2013 08:44 AM

December 19, 2013

Altus Metrum

keithp's rocket blog: AltOS 1.3

AltOS 1.3 — TeleMega and EasyMini support

Bdale and I are pleased to announce the release of AltOS version 1.3.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a major release of AltOS as it includes support for both of our brand new flight computers, TeleMega and EasyMini.

AltOS Firmware — New hardware, new features and fixes

Our new advanced flight computer, TeleMega, required a lot of new firmware features, including:

  • 9 DoF IMU (3 axis accelerometer, 3 axis gyroscope, 3 axis compass).

  • Orientation tracking using the gyroscopes (and quaternions, which are lots of fun!); a small sketch of the quaternion update step follows this list.

  • APRS support so your existing amateur radio receiver can track the location of your rocket.

  • Software FEC, both encoding and decoding.

  • Four fully-programmable pyro channels, in addition to the usual apogee and main channels.

  • STM32L CPU support. TeleMega needed a more powerful processor. The STM32L is a 32-bit ARM Cortex-M3 which is definitely up to the challenge.
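
As promised above, here is a minimal illustration of the quaternion update step. This is not AltOS code, just the standard first-order integration of body-frame gyro rates into an orientation quaternion:

#include <math.h>

/* Illustrative only: first-order quaternion update from body-frame gyro
 * rates (rad/s) over a timestep dt. Not taken from the AltOS sources. */
typedef struct { float w, x, y, z; } quat;

static void quat_integrate(quat *q, float gx, float gy, float gz, float dt)
{
    /* dq/dt = 0.5 * q * (0, gx, gy, gz)  (quaternion product) */
    float dw = 0.5f * (-q->x * gx - q->y * gy - q->z * gz);
    float dx = 0.5f * ( q->w * gx + q->y * gz - q->z * gy);
    float dy = 0.5f * ( q->w * gy - q->x * gz + q->z * gx);
    float dz = 0.5f * ( q->w * gz + q->x * gy - q->y * gx);

    q->w += dw * dt;
    q->x += dx * dt;
    q->y += dy * dt;
    q->z += dz * dt;

    /* Renormalize to keep a unit quaternion despite rounding error */
    float n = sqrtf(q->w * q->w + q->x * q->x + q->y * q->y + q->z * q->z);
    q->w /= n; q->x /= n; q->y /= n; q->z /= n;
}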

Our new easy-to-use flight computer, EasyMini also uses a new processor, the LPC11U14, which is an ARM Cortex-M0 part.

For our existing cc1111 devices, there are some minor bug fixes for the flight software, so you should plan on re-flashing flight units at some point. However, there aren’t any incompatible changes, so you don’t have to do it all at once.

Bug fixes:

  • More USB fixes for Windows.

  • Turn off the cc1111 RC oscillator at startup. This may save a bit of power, and may reduce noise inside the chip a bit.

AltosUI — Redesigned for TeleMega and EasyMini support

AltosUI has also seen quite a bit of work for the 1.3 release, but almost all of that was a massive internal restructuring necessary to support flight computers with a wide range of sensors. From the user’s perspective, it’s pretty similar with a few changes:

  • Graphs can now show the raw barometric pressure

  • Support for TeleMega and EasyMini, including alternate TeleMega pyro channel configuration.

  • Bug fixes in how data were extracted from a flight record for graphing — sometimes values would end up getting plotted out of order, causing weird jaggy lines.

December 19, 2013 10:22 AM

December 17, 2013

Richard Hughes, ColorHug

Is PackageKit-hawkey now ready for primetime?

I’ve been using the hawkey backend on my Fedora 20 system for about 6 weeks now. In that time, I’ve found bugs in hawkey, librepo and even libsolv and I’d like to thank Michael, Tomas and Ales for all the help debugging and reviewing all the fixes. Of course, there were quite a few PackageKit bugs fixed too. So if you’re testing PackageKit-hawkey you really want to update to these packages:

Those updates are currently on their way to updates-testing, but will be in Fedora 20 in a few short days barring any last minute problems. I am now happy we can switch Fedora 21 to using hawkey by default, and reap the rewards of all the hard work put in by so many people over the last few months. I for one am really happy about the speed boost brought to all the applications using PackageKit.

On that note, happy Christmas everyone.

by hughsie at December 17, 2013 05:56 PM

Elphel

NC393 development progress – the initial software

The software used in the previous Elphel cameras was based on the GNU/Linux distribution supported by Axis Communications for their ETRAX processors. Of course it was heavily modified: we developed new code and ported many applications to run in the camera. Over the years we worked on making it easier to install, use and update, and provided customized Live GNU/Linux distributions so that those with zero experience of this operating system could still use the camera development software. Originally we used a Knoppix-based CD, then a DVD, then switched to Kubuntu when it became available and stable. And the DVDs were eventually replaced by USB flash drives.

Knoppix and Kubuntu are for the host computer; the cameras themselves used the same non-standard, mostly home-brewed distribution, which became more and more difficult to maintain, especially after Axis abandoned their processors. So even during the first attempt to move to a new platform we really hoped to be able to use a modern distribution for embedded systems, and to get rid of the nightmare of porting applications such as PHP ourselves and then doing mostly the same all over again when new revisions became available. And to be able to use the latest Linux kernel rather than spend time modifying the IDE driver myself to support large-block hard drives when most manufacturers abandoned 512-byte ones – the 2.6.19 kernel does not have that support, and it is not easy to use the later drivers.

Oleg is now working on adapting the OpenEmbedded distribution and workflow for the new camera distribution, and while embracing the power of "bitbaking" we are trying to preserve the features we implemented in the NC353 camera software. And while the OpenEmbedded-based Yocto Project is aimed at embedded system developers, we need software for Elphel camera users – software that can be easily installed by a single script (at least on a particular GNU/Linux distribution) or come pre-installed on flash media. It should work "out of the box" for users with no prior GNU/Linux experience – most of the camera users have a different OS on their computers. We would also like to keep what we believe has an important practical use – the feature behind our /*source is inside*/ logo on the cameras. Each camera keeps the source code of its modifications archived in the internal flash file system, so running the script downloaded from the camera results in a virtually identical binary image, even if some software in the camera was custom-modified from the official (supported through Elphel git repositories) distribution.

There is still a lot in OE that we do not fully understand, but we are trying to do it right from the very beginning, knowing how important that is from our experience of major code re-organizations for the previous products. Oleg is making good progress; there is a wiki page and there are Git repositories, meta-elphel393 and meta-ezynq, that document our work on this.

I did not succumb to the temptation to start working on the FPGA code immediately – there are some new ideas we want to try, as well as some left for a future major "revolution" when updating the existing cameras' FPGA code for the new sensors and applications. Anyway, we are not under pressure to demonstrate images from the new camera, and we are confident that this job will be done in the expected time and that we will have the NC393 operational by the second half of 2014. And time is working for us – there are many people working with the Xilinx Zynq now, and the active development weeds out bugs at a high rate. Failing to upgrade to the latest version already cost a whole week of development time – the bug in the Xilinx Ethernet driver turned out to be already fixed.

While Oleg was immersing himself in OpenEmbedded, I was looking into kernel driver development and what has changed since the 2.6.19 era I dealt with when working on the previous camera software. There turned out to be quite a few changes, and I decided to learn the new features by working on the simpler drivers that we needed for the new boards. First of all I was pleased to find out that of the 7 I²C chips used on the 10393+10389 boards, 3 were supported by the available kernel drivers – I just had to specify them in the Device Tree, and the supercap-powered real-time clock was immediately recognized by the system – so were the temperature sensor/EEPROM and the GPIO ports. Of the remaining ones with no available drivers, the most challenging turned out to be the SI5338 (clock generator), and I tried to add support for this device, using sysfs to control it, the Device Tree (DT) to initialize it, and dynamic debug to facilitate development – none of these interfaces were used in the previous cameras.

The SI5338 had all the needed documentation available on the manufacturer's web site, ready for download. But the device itself turned out not to be so easy to control, and the recommended procedure included generating the register map with the ClockBuilder software (for MS Windows), then loading the data into the device registers and initializing it with rather simple code, for which Silicon Labs provides the source. That did not seem very convenient, so I tried to implement a driver that can be controlled at run time directly, calculating the particular register values from the high-level data on the fly. Most features are now supported in the si5338.c driver; it is also possible to load register data generated by the ClockBuilder software (it is possible to run it with Wine) after converting the file with a Python script. It took me more time than I expected to develop this driver to a usable state, but I hope this work will be useful for others too. The SI5338 is an excellent device that deserves better support in the Linux kernel. And having the driver working eliminates the last remaining obstacle to starting work on the FPGA code. Or one of the last remaining – there are still a few minor ones left.
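
As a user-space illustration only: with such a sysfs-controlled driver, setting an output frequency can be as simple as writing a value to an attribute file. The device path and attribute name below (out0_freq under a made-up I²C device path) are hypothetical, not the actual interface exposed by si5338.c:

#include <stdio.h>

/* Hypothetical example of driving a sysfs-controlled clock generator from
 * user space; the path and attribute name are made up for illustration. */
int main(void)
{
    const char *attr = "/sys/bus/i2c/devices/0-0070/out0_freq"; /* hypothetical */
    FILE *f = fopen(attr, "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "%u\n", 25000000u);  /* request 25 MHz on output 0 */
    fclose(f);
    return 0;
}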

by andrey at December 17, 2013 07:41 AM

December 13, 2013

Video Circuits

Ron Hays Music-Image: Odyssey (1979)

"Ron Hays Music-Image: Odyssey" is a 1979 Laserdisc compilation of experimental music videos using the analog animation technique of Scanimation.

Thanks to Thebetawizard for the upload

by Chris (noreply@blogger.com) at December 13, 2013 03:57 AM

December 10, 2013

Moxie Processor

Putting it together: on-chip firmware

The on-chip firmware for the Marin SoC has been updated with the gdb stub, so now when you program the FPGA, you’ll see the following on the serial console:

MOXIE On-Chip Bootloader v2.0
Copyright (c) 2013 Anthony Green 

Waiting for an S-Record Download or Remote GDB Connection...

…and the Nexys3 7-segment display reads "FEEd". At this point you can send down an srecord-encoded binary that will then start running at 0x30000000 (7-segment display reads "3000"), or connect with moxie-elf-gdb (7-segment display reads "dEb2"). A typical gdb session looks like this:

The final bit of the puzzle was a missing feature in the on-chip RAM controller — not external RAM, but RAM cobbled together from FPGA logic which is used by the on-chip firmware for stack & heap. I had left out byte-level access in my initial design, so every read/write was 16 bits wide – potentially wiping out memory unintentionally. Once I figured this out, everything started to work.

I’m done with the on-chip bootloading firmware for now!

by green at December 10, 2013 11:45 AM

December 09, 2013

Bunnie Studios

Name that Ware December 2013

The Ware for December 2013 is shown below.

It’s been a while since I’ve had a proper die shot for a name that ware. Thanks to T. Holman for sharing this ware with us!

by bunnie at December 09, 2013 12:49 PM

Winner, Name that Ware November 2013

The Ware for November 2013 is anisotropic conductive tape, aka “Z-tape”. When I first heard about Z-tape, I scratched my head and wondered how that could work; but after lighting it up under a microscope, I instantly had a clear intuition for how it works, and its limitations.

The winner for last month’s competition is Scott Roberts, who for the record entered the correct guess prior to the answer being posted a couple days later in a follow-up post about circuit stickers. Congrats, email me for your prize!

by bunnie at December 09, 2013 12:49 PM

December 08, 2013

FreakLabs

Wrecking Crew Orchestra - Cosmic Beat Behind the Scenes

Last week, Wrecking Crew Orchestra wrapped up their Cosmic Beat show which I helped out with. There were six performances in total, three in Osaka and three in Tokyo and it was a blast working on it with them. They recently published the opening set from the show which featured Wrecking Crew Orchestra, EL Squad. This was the group that made a big splash with "Tron Dance" in 2012. (https://www.youtube.com/watch?v=6ydeY0tTtF4) The Cosmic Beat show used quite a bit...

December 08, 2013 04:54 PM

December 02, 2013

Richard Hughes, ColorHug

PackageKit on speed

I spent a few days last week optimising PackageKit. I first added a couple of huge 350ms+ optimisations when using Hawkey. Then I turned my attention to the daemon itself and, after adding a lot of profiling hooks to packagekitd, I recoiled in horror at the amount of time it took to do simple things that everyone assumed would be fast.

A lot of unused functionality that was hurting transaction start times was removed. Certain core string functions were made fractions of a millisecond faster, and transactions became a few hundred milliseconds quicker in a few places, etc. The final result is that everything feels much speedier. Time-critical features like command-not-found and search-as-you-type now actually feel useful.

$ time pkcon search name powertop &> /dev/null
real    0m0.082s

If you want to try out the new hotness, install the Fedora 20 update, enable the new hawkey backend and make sure you give karma. There’s also no more Zif backend in PackageKit, as hawkey is now faster and more reliable for all operations.

by hughsie at December 02, 2013 04:38 PM

November 29, 2013

Andrew Zonenberg, Silicon Exposed

Bug hunting

This is the story of the hunt for a bug that I've been chasing, on and off, for the last month.

After my last post on the PDU, I began doing more exhaustive testing. I left Munin polling all of the stats every 5 minutes and kept the GUI open for maybe half an hour, turning channels on and off, and everything seemed fine...

Then I went away to do something else, came back to my desk, and saw that the GUI had frozen. The board wasn't responding to ping (or any network activity at all), and I had no idea why.

Other than confirming that the interconnect fabric was still working (by resolving the addresses of a few cores by the name server) there wasn't a ton I could do without adding some diagnostic code.

I then resynthesized the FPGA netlist with the gdb bridge enabled on the CPU. (The FPGA is packed pretty full; I don't leave the bridge in all the time because it substantially increases place-and-route runtime and sometimes requires decreasing the maximum clock rate). After waiting around half an hour for the build to complete I reloaded the FPGA with the new code, fired up the GUI, and went off to tidy up the lab.

After checking in a couple of times and not seeing a hang, I finally got it to crash a couple hours later. A quick inspection in gdb suggested that the CPU was executing instructions normally, had not segfaulted, and there was no sign of trouble. In each case the program counter was somewhere in RecvRPCMessage(), as would be expected when the message loop was otherwise idle. So what was the problem?

The next step was to remove the gdb bridge and insert a logic analyzer core. (As mentioned above the FPGA is filled to capacity and it's not possible to use both at the same time without removing application logic.)

After another multi-hour build-and-wait-for-hang cycle, I managed to figure out that the CPU was popping the inbound-message FIFO correctly and seemed to be still executing instructions. None of the error flags were set.

I thought for a while and decided to check the free-memory-page counter in the allocator. A few hours later, I saw that the free-page count was zero... a telltale sign of a memory leak.

I wasted untold hours and many rebuild cycles trying to find the source of the leak before sniffing the RPC link between the CPU and the network. As soon as I saw packets arriving and not being sent, I knew that the leak wasn't the problem. It was just another symptom. The CPU was getting stuck somewhere and never processing new Ethernet frames; as soon as enough frames had arrived to fill all memory then all processing halted.

Unfortunately, at this point I still had no idea what was causing the bug. I could reliably trigger the logic analyzer after the bug had happened and the CPU was busy-waiting (by triggering when free_page_count hit 1 or 0) but had no way to tell what led up to it.

RPC packet captures taken after the fault condition showed that the new-frame messages from the Ethernet MAC were arriving to the CPU just fine. The CPU could be clearly seen popping them from the hardware FIFO and storing them in memory immediately.

Eventually, I figured out just what looked funny about the RPC captures: the CPU was receiving RPC messages, issuing memory reads and writes, but never sending any RPC messages whatsoever. This started to give me a hint as to what was happening.

I took a closer look at the execution traces and found that the CPU was sitting in a RecvRPCMessage() call until a message showed up, then PushInterrupt()ing the message and returning to the start of the loop.

/**
@brief Performs a function call through the RPC network.

@param addr Address of target node
@param callnum The RPC function to call
@param d0 First argument (only low 21 bits valid)
@param d1 Second argument
@param d2 Third argument
@param retval Return value of the function

@return zero on success, -1 on failure
*/
int __attribute__ ((section (".romlibs"))) RPCFunctionCall(
    unsigned int addr,
    unsigned int callnum,
    unsigned int d0,
    unsigned int d1,
    unsigned int d2,
    RPCMessage_t* retval)
{
    //Send the query
    RPCMessage_t msg;
    msg.from = 0;
    msg.to = addr;
    msg.type = RPC_TYPE_CALL;
    msg.callnum = callnum;
    msg.data[0] = d0;
    msg.data[1] = d1;
    msg.data[2] = d2;
    SendRPCMessage(&msg);

    //Wait for a response
    while(1)
    {
        //Get the message
        RecvRPCMessage(retval);

        //Ignore anything not from the host of interest; save for future processing
        if(retval->from != addr)
        {
            //TODO: Support saving function calls / returns
            //TODO: Support out-of-order function call/return structures
            if(retval->type == RPC_TYPE_INTERRUPT)
                PushInterrupt(retval);
            continue;
        }

        //See what it is
        switch(retval->type)
        {
            //Send it again
            case RPC_TYPE_RETURN_RETRY:
                SendRPCMessage(&msg);
                break;

            //Fail
            case RPC_TYPE_RETURN_FAIL:
                return -1;

            //Success, we're done
            case RPC_TYPE_RETURN_SUCCESS:
                return 0;

            //We're not ready for interrupts, save them
            case RPC_TYPE_INTERRUPT:
                PushInterrupt(retval);
                break;

            default:
                break;
        }

    }
}

I spent most of a day repeatedly running the board until it hung to collect a sampling of different failures. A pattern started to emerge: addr was always 0x8003, the peripheral controller. This module contains a couple of peripherals that weren't big enough to justify the overhead of a full RPC router port on their own:
  • One ten-signal bidirectional GPIO port (debug/status LEDs plus a few reserved for future expansion)
  • One 32-bit timer with interrupt on overflow (used for polling environmental sensors for fault conditions, as well as socket timeouts)
  • One I2C master port (for talking to the DACs)
  • Three SPI master ports (for talking to the ADCs)
The two most common values for callnum in the hang state were PERIPH_SPI_SEND_BYTE and PERIPH_SPI_RECV_BYTE, but I saw a PERIPH_SPI_DEASSERT_CS call once. The GPIO and I2C peripherals aren't used during normal activity and are only touched when someone changes a breaker's trip point or the network link flaps, so I wasn't sure if the hang was SPI-specific or related to the peripheral controller in general.

After not seeing anything obviously amiss in the peripheral controller Verilog, I added one last bit of instrumentation: logging the last message successfully processed by the peripheral controller.

The next time the board froze, the CPU was in the middle of the first PERIPH_SPI_RECV_BYTE call in the function below (reading one channel of a MCP3204 quad ADC) but the peripheral controller was idle and had most recently processed the PERIPH_SPI_SEND_BYTE call on the line before.

unsigned int ADCRead(unsigned char spi_channel, unsigned char adc_channel)
{
    //Get the actual sensor reading
    RPCMessage_t rmsg;
    unsigned char opcode = 0x30;
    opcode |= (adc_channel << 1);
    opcode <<= 1;
    RPCFunctionCall(g_periphAddr, PERIPH_SPI_ASSERT_CS, 0, spi_channel, 0, &rmsg);
    RPCFunctionCall(g_periphAddr, PERIPH_SPI_SEND_BYTE, opcode, spi_channel, 0, &rmsg);  //Three dummy bits first
                                                                                         //then request read of CH0
                                                                                         //(single ended)
    RPCFunctionCall(g_periphAddr, PERIPH_SPI_RECV_BYTE, 0, spi_channel, 0, &rmsg);       //Read first 8 data bits
    unsigned int d0 = rmsg.data[0];
    RPCFunctionCall(g_periphAddr, PERIPH_SPI_RECV_BYTE, 0, spi_channel, 0, &rmsg);       //Read next 4 data bits
                                                                                         //followed by 4 garbage bits
    unsigned int d1 = rmsg.data[0];
    RPCFunctionCall(g_periphAddr, PERIPH_SPI_DEASSERT_CS, 0, spi_channel, 0, &rmsg);

    return ((d0 << 4) & 0xFF0) | ( (d1 >> 4) & 0xF);
}

Operating under the assumption that my well-tested interconnect IP didn't have a bug that could make it drop packets randomly, the only remaining explanation was that the peripheral controller was occasionally ignoring an incoming RPC.

I took another look at the code and found the bug near the end of the main state machine:

//Wait for RPC transmits to finish
STATE_RPC_TXHOLD: begin
    if(rpc_fab_tx_done) begin
        rpc_fab_rx_done <= 1;
        state <= STATE_IDLE;
    end
end //end STATE_RPC_TXHOLD

I was setting the "done" flag to pop the receive buffer every time I finished sending a message... without checking that I was sending it in response to another message. The only time this was ever untrue was when sending a timer overflow interrupt.

As a result, if a new message arrived at the peripheral controller between the start and end of sending the timer overflow message, it would be dropped. The window for doing this is only four clock cycles every 50ms, which explains the extreme rarity of the hang.

EDIT: Just out of curiosity I ran a few numbers to calculate the probability of a hang (a short snippet reproducing the arithmetic follows the list):
  • At the 30 MHz CPU speed I was using for testing, the odds of any single RPC transaction hanging are 1 in 375,000.
  • Reading each of the 12 ADC channels requires 5 SPI transactions, or 60 in total. The odds of at least one of these triggering a hang are 1 in 6250.
  • The client GUI polls at 4 Hz.
  • The chance of a hang occurring within the first 15 minutes of runtime is 43%.
  • The chance of a hang occurring within the first half hour is 68%.
  • There is about a 10% chance that the board will run for over an hour without hanging, and yet the bug is still there.
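
For reference, here is a small standalone sketch (not from the original post) that reproduces the arithmetic above, assuming each GUI poll reads all 12 ADC channels (60 SPI transactions) at 4 Hz:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Figures from the list above */
    const double p_txn  = 1.0 / 375000.0;              /* hang odds per RPC transaction */
    const double p_poll = 1.0 - pow(1.0 - p_txn, 60);  /* 60 SPI transactions per poll, ~1 in 6250 */
    const double polls_per_min = 4.0 * 60.0;           /* GUI polls at 4 Hz */

    const double p_15 = 1.0 - pow(1.0 - p_poll, 15 * polls_per_min);  /* ~43% */
    const double p_30 = 1.0 - pow(1.0 - p_poll, 30 * polls_per_min);  /* ~68% */
    const double p_60 = 1.0 - pow(1.0 - p_poll, 60 * polls_per_min);  /* ~90%, i.e. ~10% survive an hour */

    printf("hang within 15 min: %.0f%%, 30 min: %.0f%%, survive 1 h: %.0f%%\n",
           100.0 * p_15, 100.0 * p_30, 100.0 * (1.0 - p_60));
    return 0;
}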

by Andrew Zonenberg (noreply@blogger.com) at November 29, 2013 09:21 PM

Altus Metrum

keithp's rocket blog: Black Friday 2013

Back on Black (Friday) Event

Altus Metrum is pleased to announce our “Back on Black (Friday)” event!

For the first time since the Black Forest fire in June, we’re re-opening our web store this weekend with a host of new and classic Altus Metrum products, including a special pre-order discount on our latest-and-greatest flight computer design, TeleMega.

This weekend only, Friday, 29 November 2013 through Monday, 2 December, 2013, the first 40 TeleMega direct orders placed through our web store will receive a special $50 pre-order discount (regular $400, now only $350!).

TeleMega is an advanced flight computer with 9-axis IMU, 6 pyro channels, uBlox Max 7Q GPS and 40mW telemetry system. We designed TeleMega to be the ideal flight computer for sustainers and other complex projects. TeleMega production is currently in process, and we expect to be ready to ship in mid-December. Pre-order now and we won’t charge you until we ship. Learn more about TeleMega at:

http://altusmetrum.org/TeleMega/

We are also pleased to announce that TeleBT is back in stock. Priced at $150, TeleBT is our latest ground station that connects to your laptop over USB or your Android device over BlueTooth. Learn more about TeleBT at

http://altusmetrum.org/TeleBT/

Another new product we’re thrilled to announce is EasyMini! Priced at only $80, EasyMini is a two-channel flight computer with built-in data logging and USB data download.

Like our more advanced flight computers, EasyMini is loaded with sophisticated electronics and firmware, designed to be very simple to use yet capable enough for high performance airframes. Perfect as a first flight computer, EasyMini is also great as a backup deployment controller in complex projects. Learn more about EasyMini at:

http://altusmetrum.org/EasyMini/

Also in stock for immediate shipment is MicroPeak, our 1.9 gram recording altimeter available for $50. The MicroPeak USB adapter, also $50, has been improved to make data downloading a snap. Read more about these at:

http://altusmetrum.org/MicroPeak

http://altusmetrum.org/MicroPeakUSB

You can learn more about these and all our other Altus Metrum products at http://altusmetrum.org. The special discount on TeleMega pre-orders is available only on orders placed directly through Bdale’s web store at

http://shop.gag.com

Thank you all for your support of Altus Metrum during 2013. It’s been a rough year, but we’re having a great time updating our existing products and designing new stuff! We look forward to returning products like TeleMetrum and TeleMini to the market soon, and plan to introduce even more new products soon.

November 29, 2013 06:02 AM

Video Circuits

Aldo Tambellini Cathodic Works - 1966-76

"This double DVD release presents for the first time a selection of the cathodic experimental works from the seminal Italo-american artist Aldo Tambellini, a selection of classic documents of one of the first pioneers of video art and audiovisual experiment.
This double DVD release presents for the first time a selection of the cathodic experimental works from the seminal Italo-american artist Aldo Tambellini, a selection of classic documents of one of the first pioneers of video art and audiovisual experimentation from New York east side scene of the 60s and 70s. Unreleased and classic works available for the first time. This is the first release of the Classic series by Von Archives."
"Aldo Tambellini
Cathodic Works 1966-1976
Von – VON 014 DVD
DVD A
- Black Video 1 (1966, ½", b&w, sound, 31')
- Black video 2 (1966, ½", b&w, sound, 28')
- Black Spiral (1969, 16mm reversal, b&w, static sound, 6')
- Black Video 1 projections (1966, ½", b&w, sound, 18')
- Interview at the Black Gate Theatre (1967, ½", b&w, sound, 2')
DVD B
- Minus One (1969, 2" on ½", b&w, sound, 21')
- 6673 (1973, ½", color, sound, 32')
- Clone (1976, ½", b&w, sound, 40')
all material is a transfer
from the original tapes,
non edited and
non manipulated
courtesy of Aldo Tambellini Archive
all rights reserved
mastering/design by Von Archives
curated by Pia Bolognesi / Giulio Bursi
2 × DVD
Italy
Released: 2012
edition of 1000
VCS/VonClassic series
VON 2012"

by Chris (noreply@blogger.com) at November 29, 2013 05:04 AM

Moxie Processor

A Really Tiny GDB Remote Protocol Stub

I recently trimmed the Marin SoC’s on-chip memory down to 4k. The existing firmware for downloading srecord programs into external RAM for execution was taking up about 2k. With 2k to spare, I was wondering if you could fit a GDB remote protocol stub in there as well. It turns out that you can! Here is the code for tinystub.c: https://raw.github.com/atgreen/moxie-cores/master/firmware/tinystub.c.

With this stub you can load programs into the target device, examine memory, run programs and even set breakpoints (I had to finally implement BRK in the moxielite core for this).

A full tinystub executable (with startup code, etc.) is about 2200 bytes of moxie code. This means I can easily merge it with the existing firmware, allowing people to either download an srecord program or connect to the device with moxie-elf-gdb. I believe everything is in place now to support running the GCC testsuite on hardware (dejagnu will use gdb to download and execute test programs).
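
For context (not code from tinystub.c, just the protocol basics): every GDB remote serial protocol packet is framed as $<payload>#<checksum>, where the checksum is the modulo-256 sum of the payload bytes written as two hex digits, so a large part of any stub is simply this framing plus a command dispatcher:

#include <stdint.h>
#include <stdio.h>

/* Frame a GDB remote serial protocol packet: $<payload>#<checksum>.
 * The checksum is the modulo-256 sum of the payload bytes, in hex. */
static int gdb_frame_packet(char *out, size_t outlen, const char *payload)
{
    uint8_t sum = 0;
    for (const char *p = payload; *p; p++)
        sum += (uint8_t)*p;
    return snprintf(out, outlen, "$%s#%02x", payload, sum);
}

/* Example: the stock "OK" reply is framed as "$OK#9a". */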

by green at November 29, 2013 02:44 AM

November 28, 2013

Bunnie Studios

haxlr8r Map of Shenzhen Electronics Market

I’m fond of trawling the electronic markets of Shenzhen. It’s a huge area, several city blocks; it is overwhelming in scale. My friends at haxlr8r have published a guide to the markets, targeted at helping intrepid hacker-engineers use the market more efficiently, without having to spend a couple of weeks just figuring out the basics.

This is the first guide I’ve seen that gives a floor-by-floor breakdown of the wares contained in each building. This is particularly handy as some buildings contain several specialties that are not reflected by the items you find on the ground floor. It’s also bi-lingual, which helps if you can’t speak the language and you need to point at something the locals can read. While the map is missing a couple of my favorite spots, overall it’s well done and took a lot of effort to compile.

If you’re into making electronics, this electronics market is a must-see destination. If you have an idea you’re itching to build, you might want to consider looking into haxlr8r. haxlr8r’s offices are right in the heart of the electronics district, and I’m a mentor for the program; so, it’s a great opportunity to learn the markets, build stuff, and hang out and have a few beers.

by bunnie at November 28, 2013 10:57 AM

November 25, 2013

Video Circuits

Tele Visions Project


great line-up! 

"Commemorate the end of TV with Tele Visions, a series of live and broadcast events interrogating the medium of television on its death bed.

Devised, curated and produced to mark the occasion of the analog switch off around Australia, Tele Visions introduces audiences to the works of more than 100 artists working in the televisual space.

You can tune in live via the Tele Visions website or catch it on the analog broadcast if you live nearby. Box Set will be running 24hrs a day until midnight this Wednesday the 27th, from then on the Tele Visions channel begins a five day long continuous broadcast of artworks made for TV. Box Set is one of five live works commissioned by Tele Visions. Other artists include Lara Thoms, WRONG SOLO, Joel Stern and Pia van Gelder.

The entire broadcast program is now online with more than 120 works from 100 artists scheduled up to the minute each day. You can peruse the program from the Tele Visions website or download the TV Guide which also features short critical essays from Doug Anderson (SMH TV reviewer for 40 years), Sherry Miller Hocking (Experimental Television Center) and John Gillies (Video Artist and academic, COFA). Pick up a limited edition printed (in glorious pink!) TV Guide at live events at both Carriageworks and Verge Gallery.
Tele Visions (28 November – 3 December 2013) is a project devised, directed and produced by Alex White and Emma Ramsay. Presented by Performance Space as part of YOU'RE HISTORY, a program celebrating 30 years of Performance Space."

by Chris (noreply@blogger.com) at November 25, 2013 11:38 PM

November 22, 2013

Richard Hughes, ColorHug

Testing the hawkey backend in Fedora 20

The grand plan is that Fedora is replacing yum with dnf in Fedora 21/22. For a few technical reasons PackageKit isn't going to be using the python DNF layer, but instead using the two main libraries that DNF is built upon directly, namely hawkey (which in turn uses libsolv) and librepo.

I’ve been working with the hawkey and librepo developers on-and-off for a few months now, and we’ve now got a “hawkey” backend in PackageKit which I’ve been stress-testing every day for the last week or so. Today I released PackageKit 0.8.13 with all the fixes in the hawkey backend that make it, well, actually work correctly.

If you’d like to test out the backend, the procedure is pretty simple. Either wait for PackageKit-0.8.13-1.fc20 to hit updates-testing or manually download all the packages. Make sure you’ve updated to 0.8.13-1, and then install the PackageKit-hawkey subpackage and then remove the PackageKit-yum subpackage. If you don’t know how to do this you probably should stick to the tried and tested yum backend for now :)

Reboot, and then pkcon backend-details should tell you that you're indeed running with the hawkey backend. The first transaction will take a little time as all the metadata will be downloaded and built into a .solv file, but after that it should be fine. From there, test offline updates, gnome-software and all the new stuff, and file bugs with a way to reproduce and a backtrace if anything fails (and grab me on IRC if you can). A known issue is that installing and removing groups is not implemented, but that should only affect the old gpk-application application.

And the most important question… Is hawkey faster than yum? I’ll have to let the early adopters be the judge of that. :)

by hughsie at November 22, 2013 04:42 PM

November 21, 2013

Richard Hughes, ColorHug

Offline Updates Performance Notes

So, after my epic 20+ minute offline update of 245 packages, last night I decided to look at some profiling numbers. All my testing was done using git master PackageKit (for the new strace support) on an otherwise unmodified Fedora 20 snapshot from last week. For the strace I chose to update two packages, otherwise the strace -tt output went maaaasive. Some salient points:

  • yum opens and closes the rpmdb 6692 times (that’s about 6690 more than it needs to) – we’re investigating why
  • fdatasync and fsync are killing us:

 

duration(ms) system call
805.749 fdatasync(17)
752.828 fsync(27)
658.659 fdatasync(9)
614.367 fdatasync(15)
598.182 fdatasync(33)
535.642 wait4(903, [{WIFEXITED(s) && WEXITSTATUS(s) =
423.247 wait4(911, [{WIFEXITED(s) && WEXITSTATUS(s) =
368.85 fsync(22)
309.556 stat("/var/lib/yum/yumdb/g/gvfs-fuse-1.18.3-1.fc20-x86_64/checksum_type"
217.877 fdatasync(18)
179.002 close(23)

The full strace log is here (warning, huge) if you’re interested. I’ve got some other work to be doing today, but I’ll continue to work on this at the weekend.

by hughsie at November 21, 2013 09:50 AM

November 19, 2013

OggStreamer

#oggstreamer – NetIdee 2012 Report

For the last year the OggStreamer-Project was generously supported by the “NetIdee“- Program from the Internet Foundation Austria.

This support allowed us to push the software of this project to "Release Candidate 1" status, release an OggStreamer SDK, and also made it possible to produce a small series of 54 devices.

If you are doing OpenSource / PublicDomain / OpenSourceHardware projects related to the Internet and Austria, the NetIdee program might be a way to obtain some funding (their yearly calls usually end in August).

Thanks a lot NetIdee!

The final report for the OggStreamer/NetIdee Project can be downloaded here (it is in German only).


by oggstreamer at November 19, 2013 11:15 AM

November 18, 2013

Bunnie Studios

Introducing chibitronics

Today, my collaborator Jie Qi and I launched a Crowd Supply campaign for circuit stickers.

Please visit the campaign page to see more photos of the stickers and to see how they can be used.

Here, I will write a bit about the background story, tech details, and manufacturing processes that went into making them.

Circuit stickers are peel-and-stick electronics for crafting circuits. In a nutshell, they are circuits on a flexible polyimide substrate with anisotropic tape (or “Z-tape” — so named because electricity only flows vertically through the tape, and not laterally) laminated on the back.

The use of Z-tape allows one to assemble circuits without the need for high-temperature processing (e.g. soldering or reflow), thereby enabling compatibility with heat-sensitive and/or pliable material substrates, such as paper, fabric, plastic, and so forth.

This enables electronics to be integrated in a range of non-traditional material systems with great aesthetic effect, as exemplified by the addition of circuit stickers to fabric and paper, shown below.

The Backstory
In today’s world of contract manufacturing and turnkey service providers, designers tend to pick from a palette of existing processes to develop products. Most consumer electronic devices are an amalgamation of rigid PCB, reflow/wave soldering, ABS/PC injection molding, sheet metal forming, and some finishing processes such as painting or electroplating. This palette is sufficient to cover the full range of utility required by most products, but I’ve noticed that really outstanding products also tend to introduce new materials or novel manufacturing processes.

I’ve a long-running hypothesis that new process development doesn’t have to be expensive, as long as you yourself are willing to go onto the factory floor and direct the improvements. In other words, the expensive bit is the wages of the experts developing the process, not the equipment or the materials.

I decided to start exploring flex circuits as a design medium, under the reasoning that although flex circuit technology is common place inside consumer products — there’s probably a half dozen examples of flex PCB inside your mobile phone — it’s underrepresented in hobby & DIY products. I had a hunch that the right kind of product designed in flex could enable new and creative applications, but I wasn’t quite sure how. One of my “training exercises” was a flex adapter for emulating a TSOP NAND FLASH chip, which I had written about previously on this blog.

The moment of serendipity came last January, when I was giving a group of MIT Media Lab students a tour of Shenzhen. Jie Qi, a PhD candidate at the Media Lab, showed me examples of her work combining electronics and papercraft.

Clearly, circuits on flex could be an interesting addition to this new media, but how? Building circuits on flex and then soldering the flex circuit onto paper would be an improvement, but it’s only an incremental improvement. We wanted a solution that would be compatible with low-melting point materials; furthermore, being solder-free meant that the stickers could be used in contexts where using a soldering iron is impractical or prohibited.

Jie introduced me to Z-tape, which is a great solution to the problem. November’s ware is a 20x magnification of Z-tape laminated to the back of a circuit sticker (someone has, of course, already correctly guessed the ware at the time of writing, so I can discuss the ware in more detail here without ruining the contest).

The stipples in the photo above (click on it for a larger version) are tiny metal particles that span from one side of the adhesive layer to the other. As you can see, the distribution is statistical in nature; therefore, in order to ensure good contact, a large pad area is needed. Furthermore, traces very close to each other can be shorted out by the embedded metal particles, so as I design the circuits I have to be careful to make sure I have enough space between exposed pads. The datasheet for the Z-tape material contains rules for the minimum pad size and spacing.

The problem is that there were no standard manufacturing processes that could produce circuit stickers as we envisioned them. Here at last was a meaningful opportunity to test the theory that new process development can be done on the cheap, as long as you are willing to do it yourself. And so, I started my own little research program to explore flexible circuit media, and the challenges of making circuit stickers out of them, all on Studio Kosagi’s shoe string R&D budget.

The first thing I did was to visit the facility where flex PCBs are manufactured. The visit was eye-opening.

Above is a worker manually aligning coverlay onto flex circuit material. Coverlay is a polyimide sheet used instead of soldermask on flex circuits. Soldermask is too brittle, and will crack; therefore, for reliability over thousands of flexing cycles, a coverlay is recommended.

Above is an example of steel plates being laminated to the back of flex circuit material. In some situations, it's desirable for portions of the flex circuit to be stiff: either for mechanical mounting, or to help with SMT processing. I knew that it was possible to laminate polyimide stiffeners to flex, but I didn't know that steel lamination was also possible until I took the factory tour.

Above is an example of the intricate shapes achievable with die cutting.

After visiting the factory, we decided the next step was to do a process capability test. The purpose of this test is to push the limits of the manufacturing process — intentionally breaking things to discover the weak links. Our design exercised all kinds of capabilities — long via chains, 3-mil line widths, 0201 components, 0.5mm pitch QFN, bulky components, the use of soldermask instead of coverlay, fine detail in silkscreening, captive tabs, curved cut-outs, hybrid SMT and through hole, Z-tape lamination, etc.

Below are some images of what our process capability test design looked like.

When I first presented the design to the factory, it was outright rejected as impossible to manufacture. However, after explaining my goals, they consented to produce it, with the understanding that I would accept and pay for all the units made including the defective units (naturally). Through analyzing the failure modes of the defective units, I was able to develop a set of design rules for maintaining high yield (and therefore lowering cost) on the circuit stickers.

In my next post on chibitronics, I’ll go into how we co-developed the final design and manufacturing process for the stickers. I will also have a post talking about how we developed the partial perforation die cut manufacturing process that enables the convenient peel-and-stick format for the stickers.

I’ll also have a post on why we decided to go with Crowd Supply instead of Kickstarter, and why we picked $1 as a funding goal.

by bunnie at November 18, 2013 08:49 PM

Richard Hughes, ColorHug

Offline Updates in Fedora 20

In GNOME 3.10 we’re encouraging more people to use the offline-update functionality which we’ve been using in Fedora for a little while now. A couple of people have told me it’s really slow, but I hadn’t seen an offline update take more than a minute or so as I test updates all the time. To reproduce this, I spun up a seldom-used Fedora 20 alpha image and let GNOME download and prepare all the updates in the background. I then added some profiling code to the pk-offline-update binary, and rebooted. The offline update took almost 17 minutes to run.

So, what was it doing all that time, considering that we’ve already downloaded the packages and depsolved the transaction:

Transaction Phase            Time (s)
Start up PackageKit               0.3
Starting up yum                     3
Depsolving                         10
Signature Check                     8
Test Commit                         5
Install new packages              704
Remove old packages               168
Run post-install scripts           90

This is about an order of magnitude slower than what I expected. Some of my observations:

  • 10 seconds to depsolve an already depsolved transaction
  • 8 seconds to check a few hundred signatures
  • 168 seconds just to delete a few thousand files
  • over 10 minutes to install a few hundred RPMs seems crazy
  • 90 seconds to rebuild a few indexes seems like a huge amount of time

Some notable offenders:

Package                    Time to install (s)
selinux-policy-targeted                    122
kernel-devel                                25
libreoffice-core                            21
selinux-policy                              17
hugin                                       12

 

Package                    Time to cleanup (s)
gramps                                      11
wireshark-gnome                              8
hugin                                        7
meld                                         6
control-center                               5

Hopefully Fedora 21 will move to the hawkey backend, and we can get closer to raw librpm speed (which seems to be quite a speed boost) but even that is too slow. I’ll be looking into the individual packages this week, and trying to find what makes them so slow, and what we can do about them to speed things up.

by hughsie at November 18, 2013 09:34 AM

November 17, 2013

Bunnie Studios

Name that Ware November 2013

The Ware for November 2013 is shown below.

Clearly, it’s a magnified view of something…

by bunnie at November 17, 2013 03:59 PM

Winner, Name that Ware October 2013

The Ware for October 2013 is a blood pressure monitor, or rather, the pneumatic plumbing that controls the air pressure within the arm cuff. Below is a photo that shows the larger context of the ware.

Cheetah was the first to guess it correctly. Congrats! Email me for your prize!

by bunnie at November 17, 2013 03:59 PM

November 16, 2013

Video Circuits

Pia Van Gelder

I have been meaning to post some of Pia's work for a while now. I originally found it through Stephen Jones, as she has worked with him in the past (see videos below). She has some interesting pieces:
http://piavangelder.com/



Video from Pia Van Gelder on Vimeo.


by Chris (noreply@blogger.com) at November 16, 2013 08:12 AM

November 15, 2013

OggStreamer

#oggstreamer – Batch Assembling 40pcs.(2) – THT Soldering

We received our PCBs from the manufacturer with all SMT parts pre-soldered, but we still had to solder the THT parts. The following two videos show this process. Warning: you will see some improvised soldering … :)

Part 1:

Part 2:


by oggstreamer at November 15, 2013 02:48 PM

#oggstreamer – Release of OggStreamerSDK-RC1

Here is the release of the OggStreamerSDK-RC1 – for the first time we have a fully integrated SDK which supplies all the files and applications needed for the complete OggStreamer firmware.

If you want to experiment yourself I put together the info on the Wiki:

OggStreamer SDK RC1

Note that this is RC1 and still has a number of flaws; please have a look at our ticket system:

RC1 Flaws Tickets

Happy OggStreamer hacking ;)


by oggstreamer at November 15, 2013 02:47 PM

#oggstreamer – PCB Design Files converted to KiCAD

We proudly announce our KiCad Conversion of the original PCAD2006 Design.

You can download it from the repo here

Note: This version was tested with KiCad (BZR4213 GOST) – the currently available Windows installer from kicad-pcb.org ( KiCad_stable-2013.07.07-BZR4022_Win_full_version.exe ) is known to have problems parsing the PCB file (the schematics work fine, though).


by oggstreamer at November 15, 2013 02:47 PM

#oggstreamer – Batch Assembling 40pcs.(3) – Frontpanel assembly

For the OggStreamer frontpanel I needed to come up with a solution to produce light-guides that direct the light from the VU-meter, Power and On-Air LEDs. The first idea was to use a 3D printer and print these light-guides out of transparent PLA. But after trying to imagine what such light-guides would look like, I gave up on that idea and developed a different process. Now I am using the transparent properties of hot glue to act as a light-guide and glue the LED-PCB in place at the same time. The transparent hot glue fills all the space of the CNC-punched holes in the aluminum front-panel. In order to produce a smooth surface I am using a glass plate.

Step 1: Apply a lubricant to the glass plate to form a thin oily film so that the hot glue doesn’t stick too well to the glass – which would make separating the completed assembly from the glass a pain in the a**. (Notice the broken glass plate from our attempts without lubricant!!) WARNING: You are using a glass plate, which can break and have sharp edges. Be careful and don’t apply excessive force – if your assembly gets stuck on the glass plate you can use a hot-air gun to separate it, clean it and repeat the process.
IMG_20130919_141429
Step 2: Evenly spread the lubricant – don’t wipe it off the plate, but try to produce a thin but consistent film on the glass, without producing droplets.
IMG_20130919_141437
Step 3: Fix the aluminum front-panel to the glass plate with office clips – adjust the clips so that they will help you align the LED-PCB.
IMG_20130919_141600
Step 4: Wait until your hot-glue gun has reached a steady temperature and begin applying the hot glue just over the holes of the Power and On-Air LEDs. Remember to do this and the following steps quickly, because you only have a limited time window to place the LED-PCB properly.
IMG_20130919_141710
Step 5: Do the same for the VU-Meter holes.
IMG_20130919_141715
Step 6: Once all the glue is applied, gently push in the LED-PCB so that the still-liquid hot glue is pushed towards the glass plate.
IMG_20130919_141736
Step 7: The hot glue stays liquid for a few seconds; use that time to turn the glass plate around to check that the LED-PCB is properly aligned and, if needed, adjust its position.
IMG_20130919_141505
Step 8: Let the assembly cool down – if you are producing more than one unit, you can use this time to prepare the next one.
IMG_20130919_141756
Step 9: Remove the cooled-down assembly gently – you shouldn’t need too much force, as the applied lubricant forms a layer between the hot glue and the glass. In any case be careful – you are handling a glass plate which can break. The glass plate you see in the picture broke because we were trying to separate the assembly from it using a screwdriver.
IMG_20130919_141847
Step 10: Now you can start installing the push button and the potentiometer – we start with the push button first. Take care not to forget the elastic ring that comes with the push button.
IMG_20130919_141931
Step 11: Insert the push button from the TOP side.
IMG_20130919_141954
Step 12: Mount the push button with the plastic nut – finger pressure is enough to mount the plastic nut securely in place.
IMG_20130919_142011
Step 13: Insert the Potentiometer-PCB from the BOTTOM side.
IMG_20130919_142101
Step 14: Place the washer and the nut for the potentiometer from the TOP side, and gently tighten it with the flat wrench.
IMG_20130919_142117
Step 15: Press the prepared potentiometer knob onto the potentiometer – you might need to use a drill (6mm) to prepare the knob. Only push the knob with gentle force.
IMG_20130919_142152
Step 16: Use the corner of a Table (or something similar) to support the Potentiometer from the backside and apply a bit more force so that the Potentiometer Knob is securely mounted to the Potentiometer.
IMG_20130919_142228
Step 17: Glue the cable of the push button according to the picture. (optional)
IMG_20130919_152849
The final result:
IMG_20130919_153106

Although this process works very well, you need to take into account that gluing the PCB in place makes it a bit harder to repair or replace. You will need to use a hot-air gun to separate the aluminum front-panel from the LED-PCB, and a little patience to remove the glue residue. But it is definitely doable.


by oggstreamer at November 15, 2013 02:47 PM

Free Electrons

Updated version of our kernel driver development course: Device Tree, BeagleBone Black, Wii Nunchuk, and more!

BeagleBone Black connected to the Wii Nunchuk over I2C

In the last few years, the practical labs of our Embedded Linux kernel and driver development training were based on the ARMv5 Calao USB-A9263 platform, covering ARM kernel support as it was a few years ago. While we do regularly update our training materials, with all the changes that have occurred in the ARM kernel world over the last two years it was time to make more radical changes to this course. The update has been available since last month, and we’ve already successfully given several sessions of the updated course.

The major improvements and updates are:

  • All the practical labs are now done on the highly popular ARMv7-based BeagleBone Black, which offers much more expansion capability than the Calao USB-A9263 platform we were using. This also means that participants in our public training sessions keep the BeagleBone Black after the session!
  • All the course materials and practical labs were updated to cover and use the Device Tree mechanism. We also for example cover how to configure pin muxing on the BeagleBone Black through the Device Tree.
  • The training course is now centered around the development of two device drivers:
    1. A driver for the Wii Nunchuk. This device is connected over I2C to the BeagleBone Black, and we detail, step by step, how to write a driver that communicates with the device over I2C and then exposes its functionality to userspace through the input kernel subsystem (a minimal skeleton of such a driver is sketched after this list).
    2. A minimal driver for the OMAP UART, which we use to illustrate how to interface with memory-mapped devices: mapping I/O registers, accessing them, handling interrupts, putting processes to sleep and waking them up, etc. We expose some minimal functionality of the device to userspace through the misc kernel subsystem. This subsystem is useful to expose the functionalities of non-standard types of devices, such as custom devices implemented inside FPGAs.
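
For readers curious what such a driver looks like, here is a heavily simplified sketch of an I2C + input-subsystem driver in the spirit of the Nunchuk lab; the init register write and the axis ranges are assumptions for illustration, not the lab's reference code:

/* Minimal, simplified sketch of an I2C + input-subsystem driver. */
#include <linux/module.h>
#include <linux/i2c.h>
#include <linux/input.h>

static int nunchuk_probe(struct i2c_client *client,
                         const struct i2c_device_id *id)
{
    struct input_dev *input;
    int error;

    /* Talk to the device over I2C (assumed initialisation handshake). */
    error = i2c_smbus_write_byte_data(client, 0xf0, 0x55);
    if (error)
        return error;

    /* Expose the device to userspace through the input subsystem. */
    input = devm_input_allocate_device(&client->dev);
    if (!input)
        return -ENOMEM;

    input->name = "Wii Nunchuk";
    input_set_capability(input, EV_KEY, BTN_C);
    input_set_capability(input, EV_KEY, BTN_Z);
    input_set_abs_params(input, ABS_X, 0, 255, 4, 8);  /* joystick X */
    input_set_abs_params(input, ABS_Y, 0, 255, 4, 8);  /* joystick Y */

    return input_register_device(input);
}

static const struct i2c_device_id nunchuk_id[] = {
    { "nunchuk", 0 },
    { }
};
MODULE_DEVICE_TABLE(i2c, nunchuk_id);

static struct i2c_driver nunchuk_driver = {
    .driver   = { .name = "nunchuk" },
    .probe    = nunchuk_probe,
    .id_table = nunchuk_id,
};
module_i2c_driver(nunchuk_driver);

MODULE_LICENSE("GPL");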

And as usual, all the training materials are freely available, under a Creative Commons license, so you can study in detail the contents of the training session. It is also worth mentioning that this training session is taught by Free Electrons engineers with practical and visible experience in kernel development, as can be seen in the contributions we made to the latest kernel releases: 3.9, 3.10, 3.11 and 3.12.

For details about cost and registration, see our Training cost and registration page.

by Thomas Petazzoni at November 15, 2013 05:01 AM

November 12, 2013

OggStreamer

#oggstreamer – OggStreamer Distribution

SONY DSC

We have produced a limited number of devices and are now able to distribute them. If you are a developer working on related projects, or want a device for your media lab, hackerspace, school, university or radio station, don’t hesitate to contact us (georg <at> otelo.or.at).


by oggstreamer at November 12, 2013 06:40 PM

November 07, 2013

Elphel

Elphel next camera – sample configuration

With all three of the new boards for the NC393 series cameras assembled (but only partially tested), it is now possible to connect them to the existing components and show some possible configurations. The main applications of Elphel cameras are scientific research, system prototyping and proof-of-concept designs – areas that routinely require unique configurations – and this new camera series will continue the tradition of high modularity.

The camera boards look nothing like Lego blocks, but nevertheless they can zip together in different ways, allowing us to make new systems with minimal additional hardware. Elphel’s new design values our prior work (hardware development is still expensive) and provides compatibility with the existing modules, while enabling new features that were not previously possible. The most obvious example is the sensor interface: the 10393 board is designed to accommodate our existing sensor front ends via custom flex cables of different lengths and shapes. That will help us reduce the transition period to the new camera, so we can focus on the high-performance system board and port portions of the software and FPGA code – code that is already proven to work.

The same camera sensor ports will allow us to use multi-lane serial sensor connections needed for modern high-speed and high-resolution devices, but we will work on this only after the first part is done and we are able to replace our current systems with the new ones. Implementation of the serial sensor connection has some challenges for us because the protocols used are not open, and we have to rely only on pieces of the available information plus some reverse-engineering and research. It is not the most fun work to do, but being an Open Hardware / Free Software company we will not provide our users with semi-open documentation. Our users will always be able to rebuild all the binaries from the source code – the same binaries from the same code we have access to ourselves. The only NDA Elphel ever signed was with Kodak – that sensor NDA had a clear expiration time, so by the time we planned to start distributing our products (and thus the source documentation) we would no longer be bound by it.

The sample configuration illustrated below combines the new and existing modules; the latter have links to the design documentation on the Elphel wiki. That is not yet the case for the new boards (10393, 10385, 10389) – no circuit diagrams, parts lists or PCB layouts are publicly available as this post is being written. Hardware errors are usually much more expensive to fix, and we do not want somebody to duplicate our hardware “bugs” until we consider our products (“binaries”) good enough to go to our users. So while we set up a public Git repository when we start software development, we publish our hardware documentation simultaneously with the start of product distribution – together with the “binaries”, not ahead of them.

Sample configuration of the electronic modules of Elphel NC393 camera family


  • 1 – 10393 Multisensor camera system board based on Xilinx Zynq 7030 SoC.
  • 2 – 10385 Power supply board
  • 3 – 10389 Interface board
  • 4 – Inter-board power distribution: 6-pin (3 circuits) header on the 10385, receptacles on both 10393 and 10389
  • 5 – Inter-board signal connector: 40 pins (USB, SATA, GPIO)
  • 6 – mSATA SSD card
  • 7 – Processor heat sink (temporary). Production cameras will have custom heat spreader to transfer CPU/FPGA generated heat to the camera aluminum body or other heat sinks in multicamera systems
  • 8 – Ethernet (GigE) jack, PoE-compatible
  • 9 – DC power input (9-36V or 18-72V depending on application)
  • 10 – Memory card (can be used to boot the system for cold firmware update)
  • 11 – Micro USB B connector for system serial console with GPIO signals to select boot mode and generate system reset. Mounted on the 10393 system board
  • 12 – Micro USB A host connector for communication with external memory and I/O devices. Mounted on the 10389 interface board.
  • 13 – USB A/eSATA combo connector. eSATA port will be used for interfacing external storage devices (HDD, SSD) and downloading data from the camera internal SSD to the host computer. USB portion of the connector can provide power to the external device through the same cable as SATA data.
  • 14 – 2.5mm audio type connector for external synchronization input and output (opto-isolated and directly coupled)
  • 15,16,17 – directly connected sensor front ends. Compatible with the current 5MPix 10338 (shown) and other parallel data output sensors, programmable interface voltage. With the controlled impedance cables same ports will allow using up to 9 differential lanes plus I2C and 2 extra control signals.
  • 18,19,20 – sensor front ends connected through the 10359 multiplexer (21) that allows simultaneous acquisition of images from up to 3 sensors into on-board SDRAM and then transferring them to the system board. In the future we will develop a faster multiplexer supporting serial links to the sensors and/or the system.
  • 22 – 103695 IMU adapter board, or other "granddaughter" extension board connected to the 10389 interface (daughter) board. Two 10-pin connectors provide 3.3V and 5.0V power, USB and 4 GPIO connected to the FPGA pads through high speed voltage level shifters
  • 23 – 103696 Serial GPS adapter board with 1pps input, uses another "granddaughter" port.
  • 24,25,26 – Inter-camera synchronization (daisy chain connection) for the systems with multiple camera boards located in the same enclosure, similar to the current Elphel Eyesis4pi cameras

The setup shown above is a sort of mockup – while all the components are real, we do not yet have software to run it, or even to test it. So there is no sense in powering up such a system – nothing will happen. And there is a lot to be done before we are able even to completely test the new hardware and prepare and release revision “A” of each of the prototyped boards. We plan to be ready by the middle of 2014.

by andrey at November 07, 2013 08:13 AM

November 05, 2013

Liu Xiangfu, openmobilefree.net

Install Xilinx(ISE 14.6) Platform Cable USB under Ubuntu 13.04 64bit

Let’s make it simple:
I am using Xilinx ISE 14.6. It will fail to install the cable driver; we just ignore that error and do the following:

sudo apt-get install fxload gitk git-gui build-essential libc6-dev-i386 ia32-libs
cd /home/Xilinx #I like install them under /home
sudo git clone git://git.zerfleddert.de/usb-driver
cd usb-driver/
sudo make lib32
./setup_pcusb /opt/Xilinx/13.2/ISE_DS/ISE/
cd /lib/x86_64-linux-gnu/ && sudo ln -s libusb-0.1.so.4 libusb.so

Links may help:

  1. http://www.george-smart.co.uk/wiki/Xilinx_JTAG_Linux#Download_the_driver_source
  2. http://forums.xilinx.com/t5/Installation-and-Licensing/ISE-11-2-Impact-can-t-find-USB-II-cable-SLED-11-Linux-64-bit/m-p/42064?query.id=386680#M467

by Xiangfu Liu at November 05, 2013 12:34 AM

November 04, 2013

Free Electrons

Videos and slides of the Kernel Recipes 2013 conference

Kernel Recipes Logo
As we mentioned earlier on this blog, Free Electrons participated in the second edition of the Kernel Recipes conference in Paris, a two-day conference dedicated to kernel topics.

The videos and slides of the talks in this conference have now been published, see https://kernel-recipes.org/en/2013/conferences/ for the complete list. There are a good number of interesting topics: a discussion about the kernel development environment by Willy Tarreau, the status of Nftables and Netfilter in general by Eric Leblond, a talk explaining how to decipher kernel oopses, a talk about Crosstool-NG from Yann E. Morin, a discussion about Linux Security Modules, a talk about the status of display support in the kernel by Laurent Pinchart, and several lightning talks.

The talks from Free Electrons were:

Free Electrons really enjoyed this conference, and is looking forward to participating again next year. Thanks a lot to the organizers!

by Thomas Petazzoni at November 04, 2013 12:46 PM

November 03, 2013

ZeptoBARS

LM2940L 1A LDO regulator : weekend die-shot

UTC LM2940L-5.0 - 1A low-dropout linear regulator.
Apparently 5 contacts at the bottom-right were used to fine-tune output voltage by burning fuses between them.


November 03, 2013 10:27 PM

Elphel

NC393 development progress – testing the hardware

10393 board, memory side

We received the first prototype of the 10393 rev. "0" – the new camera system board with all the BGA chips mounted. It took a little longer because our PCB assembly manufacturer had to order solder paste stencils, as some chips (a DC-DC converter module in an LGA package and QFN chips with central thermal pads) required more than just applying tacky flux and running them through the reflow oven. The photo shows the 10393 system board together with the 10385 power supply board that I assembled earlier while waiting for the main one. This time the power supply is a separate module, so we won’t need different system board versions for different power supply options as we do with Elphel’s current NC353.

The prototype version shown has the full functionality, including PoE – a feature that we will not offer in the production cameras, to stay out of trouble with the patent trolls. As soon as the relevant patents are ruled invalid we will be able to build such boards, but currently the cameras will be powered through the regular barrel-type DC jack, or the 4-pin Molex connector in multi-camera systems like Eyesis. The 10385 also has a low-leakage (a few microamps idle consumption) switch for using the battery-powered camera in remote locations, controlled by the system clock powered by a super-capacitor (not yet installed – there is an empty space with a “+” sign visible on the photo).

10393 with 10385 board, SoC side

I finalized the 10393 board assembly, installing the other components, including a couple hundred (bragging again) 0201 resistors and capacitors. Before starting I tested the resistance (lack of shorts) between the ground and power rails to make sure that I did not screw up pinouts during schematic/PCB design, so that board revision "0" has a chance to be successfully tested. I repeated those tests while installing components, as power-to-ground shorts are rather difficult to locate when there are so many tiny capacitors between them.

With assembly done, the board was ready for the first “smoke” test – powering it up while monitoring the power consumption (I used a regular test bench power supply instead of the 10385 to provide the primary 3.3V power). I turned the power on for just a few seconds at a time, checking the secondary voltages (1.0V, 1.8V and 1.5V) with the oscilloscope. After fixing a bad solder joint on the intermediate “power good” pullup resistor (the secondary voltages are supposed to come up in a prescribed sequence), all 3 of these voltages were up and measured OK, and the board consumed 320 mA with the system reset released but no firmware to run. There are several additional DC-DC converters on board (5V for USB and 2 independently software-regulated voltages for the external boards, sensor front ends in most applications), but these converters are turned on by software, and I did not have any software at that point.

10393 board, SoC side

The photos show the heat sink and a fan attached to an aluminum angle, not directly to the Zynq chip. In the production camera there will be a custom heat sink (no fan) between the 10393 and the optional 10389 interface/storage board; it will transfer processor heat to the camera’s aluminum body, and the on-chip thermometer will be used to monitor the temperature and prevent overheating. A rather large temporary heat sink will be used during development (so as not to depend on the temperature monitoring software); the thin angle part will allow testing of the 10389 board, which will nearly touch the other surface of the aluminum plate.

The next thing was to make the CPU (Xilinx Zynq XC7Z030-1FBG484C) run and to test the DDR3 memory. If this core of the system is operational, we can test the peripherals one by one, and failures in some of them will not prevent testing of the others. If the core failed, we would have to try to find out (or just guess) the problem, redesign the board, order new ones, have new stencils made, assemble and try again. Of course we’ll need to re-spin the board before manufacturing the production units, but I hoped that the next revision would already be good enough to go to users and that the changes would be small. I wrote “guess” because, if the problems were related to DDR3 memory operation, the means to troubleshoot them would be limited – the data and address/command lines are completely buried between the chips, as the memory is placed directly opposite the Zynq SoC. There are no resistor terminations on the address/command lines, the DQ lines are swapped in each byte group and the byte groups are also swapped. I relied on Xilinx documentation saying that the data lines are OR-ed during write leveling, so the DQ swapping will not harm this functionality.

Skipping the requirement for address line termination allowed the overall design to be compact and the connections themselves to be really short (actually shorter than the lines inside the SoC chip itself). I used Micron documentation when considering such a solution, but it still needed to be tested on the real board. Such component placement allowed me to make the average length of the address/command traces 15.5mm; individual traces had to be shortened or extended to keep combined PCB delays and internal SoC pin delays the same for each address/command line and for each member in a data byte group. Internal DDR3 chip delays do not need to be considered as they are balanced inside the package. Data connection lengths (they are just peer-to-peer, with no split for the two memory chips as for the address/command lines) are even shorter – they average from 8.5mm to 14.5mm for different byte groups.

An additional challenge in initially breathing life into this new board was that we did not have proven code to run on it, something we had for the Avnet MicroZed board while developing the free software bootloader to replace the Xilinx proprietary one. So this was a real test for our code, and I decided to never even try the proprietary one on the new system.

The 10393 board has no LED (not counting the 2 Ethernet jack ones, but they are controlled by the Ethernet PHY), so I temporarily borrowed one GPIO signal from the MDIO bus (Ethernet PHY control) to be able to step through the boot process without relying on the serial console being operational. I just put the LED there without any transistor, so the 1.8V-powered diode was really dim, but that was OK. The serial output turned out to be alive immediately, so there was no real need for that debug tool and I was able to remove those extra wires. The board got to the U-Boot prompt immediately, but unfortunately not every time. So I had to spend several days (one of them wasted on a faulty micro-SD card that silently replaced one sector with garbage even when read back by the computer) figuring out the instability. I still do not understand exactly what is wrong (it happens when the relocated code switches the memory mapping and copies itself back to the low addresses), but just adding a delay by copying that range twice resolved the issue. It turned out to be a software-related problem, as it was present when running other (proven) boards too, not just the 10393.

The core of the system is now verified: automatic write leveling and the two other hardware-implemented memory training functions produce reasonable results, and the delay settings seem to be rather forgiving. That confirms the PCB design and makes it possible to move forward with testing of the other peripherals and starting the FPGA part of the design.

There are other urgent projects at Elphel I have to be involved in now, so I am not yet working on the NC393 full time, but passing this important test is really good news for us. Booting the new board with just free software, no proprietary tools at all, is also very encouraging. Xilinx just released a new version of their tools, and the human-readable (html) part of the FSBL output looks even fancier than that of Ezynq, but I believe ours is still more convenient to work with – we made it for ourselves, and so for other developers (who are like us) too.

by andrey at November 03, 2013 05:04 AM

November 02, 2013

Video Circuits

Scott Kiernan

A very smart utilisation of generation loss from Scott
"12 generations of dubbing builds each successive entrance/exit."
http://www.scottkiernan.com/

Entropic Door from Scott Kiernan on Vimeo.

by Chris (noreply@blogger.com) at November 02, 2013 02:42 AM

Andrew Zonenberg, Silicon Exposed

Managed DC PDU

As I mentioned in my last post, powering all of the prototyping boards on my desk presents some unique challenges. With only one exception (the Xilinx AC701 board), each of the 22 boards requires 5VDC at somewhere between 0.1 and 2 amps. Some are strictly USB powered, some have a 5.5/2.1mm barrel jack, and some can be powered by either USB or a barrel jack.

Powered USB hubs would reduce the number of power sources required, so I did just that. Lots of cables would get in the way so I designed a custom "backplane" USB hub with male mini-B ports which could plug directly into small prototyping boards. (As a side note, the connectors for this board were nearly impossible to find. There are very few uses for a male mini-B connector that mounts to a PCB rather than being attached to a cable so nobody makes them!)

USB backplane hub
These reduced the problem, but did not come close to eliminating it. I still had to power three backplane hubs, six standalone FPGA boards, and four standalone MCU/SoC dev boards. All needed 5V except for the AC701 (which runs on 12V) but I wanted additional 12V capability for the future if I expanded into higher-power design.

The obvious first idea was an ATX supply. My calculations of peak power for the apparatus (including room for growth) were fairly high, though, and most ATX supplies put the bulk of their output on the 12V rail and have fairly limited (well under 100W) 5V capacity.

The next thing I considered was an off-the-shelf 5V supply. This looked like a nice idea, but (as with an ATX supply) the high output current capability would represent a fire hazard if something shorted. I would obviously need overcurrent protection.

Thinking a bit more, I realized that fusing was probably not the best option. Fuses need to be replaced once blown and in a lab environment overcurrent events happen fairly often. Classical current limiting techniques would be problematic as well since many of my boards have switching power supplies. Since a switcher is a nonlinear load, reducing the input voltage doesn't actually reduce the current. Instead, load current actually increases to maintain the output voltage, which can lead to runaway failure conditions. The safer way to handle overcurrent on a switcher is to shut it down entirely.
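
To see why, treat the switcher as a roughly constant-power load: as the input voltage drops, the input current rises. A tiny illustration, with made-up numbers:

/* Constant-power load: Iin = Pout / (efficiency * Vin). */
#include <stdio.h>

int main(void)
{
    double p_out = 10.0;   /* downstream load, watts (illustrative) */
    double eff   = 0.9;    /* assumed converter efficiency */
    for (double vin = 5.0; vin >= 3.0; vin -= 0.5)
        printf("Vin = %.1f V -> Iin = %.2f A\n", vin, p_out / (eff * vin));
    return 0;
}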

I also wanted the ability to power cycle boards on command to reset a stuck board or test power-up behavior. While jiggling cables may work in a hands-on lab environment, it isn't a viable option in the remote-controlled "embedded cloud" platform I'm trying to build.

This would obviously require some intelligence on the part of the power management system. The natural solution was a managed power distribution unit (PDU) of the sort commonly used in datacenters for feeding power to racks of servers. Managed PDUs often include current metering as well, which could be very useful to me when trying to minimize power consumption in a design.

There's just one problem: As far as I can tell, nobody makes managed PDUs for 5V loads. The only ones I saw were for 12/24/48V supplies and massively overpriced: this 8-channel 12V unit costs a whopping $1,757.

What to do? Build one myself, of course!

The first step was to come up with the requirements:
  • Remote control via SNMP
  • Ten DC outputs fed by external supply
  • 4A max load for any single channel, 20A max for entire board
  • Independent overcurrent shutdown for each channel with adjustable threshold
  • Inrush timers for overcurrent shutdown to prevent false positives during powerup
  • Remote switching
  • Current metering
  • Thermal shutdown
  • Under/overvoltage shutdown
  • Input reverse voltage protection
  • Able to operate at 5V or 12V (jumper selected)
Now that I had a good idea of what I was building, it was time to start the actual design. I decided to use an FPGA instead of a MCU since the parallel nature made it easy to meet the hard-realtime demands of the overcurrent protection system. I also wanted an opportunity to field-test my softcore gigabit-Ethernet MAC, one of my CPU designs, and several other components of my thesis architecture under real-world load conditions.

PDU block diagram

The output stage is key to the entire circuit so it was very important that it be designed correctly. I put quite a bit of effort into component selection here... perhaps a bit too much, as I missed a few bugs elsewhere on the board! More on that later.

Output stage
Working from the output terminal (right side, VOUT_1) we first encounter a 5 mΩ 4-terminal shunt resistor which feeds the overcurrent shutdown circuit and current metering. This is followed by an LC filter to smooth the output power and reduce coupling of noise between downstream devices.

The fuse is provided purely as a second line of defense in the event that the soft overcurrent protection fails. As a firmware/HDL developer I know all too well what bugs are capable of, so I like to include passive safeguards whenever reasonably possible. Assuming that my code works correctly, this fuse should never blow even if the output of the PDU was connected to a dead short. (This of course requires that my protection mechanism trip faster than the fuse. Given the 1ms response time of typical fuses to small overcurrents, this isn't a very difficult task.)

Power switching is done by a high-side P-channel MOSFET connected to VOUT (the main high-current power rail). The logic-level input from the control subsystem is shifted up to VOUT level by an N-channel MOSFET. A pullup and pulldown resistor ensure that the output is kept safely in the "off" state when the system is booting.

Current monitoring
The monitoring stage is even simpler: the shunt voltage is amplified by a TI INA199A2 instrumentation amplifier, then fed to an ADC (not shown in this schematic) for metering. A comparator checks the amplified voltage against a reference voltage set by a DAC (also not shown) and if the threshold is exceeded the overcurrent alarm output is asserted.
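
Setting the trip point is then just arithmetic: the comparator threshold is the trip current times the 5 mΩ shunt times the amplifier gain (100 V/V for the INA199A2). A sketch of the conversion to a DAC code, with an assumed 12-bit DAC and 2.5 V reference:

#include <stdint.h>

#define SHUNT_OHMS 0.005   /* 5 mOhm shunt from the output stage */
#define AMP_GAIN   100.0   /* INA199A2 gain, V/V */
#define DAC_VREF   2.5     /* assumed DAC reference voltage */
#define DAC_BITS   12      /* assumed DAC resolution */

uint16_t overcurrent_trip_code(double trip_amps)
{
    /* e.g. 4 A * 0.005 ohm * 100 = 2.0 V at the comparator input */
    double vtrip = trip_amps * SHUNT_OHMS * AMP_GAIN;
    double code = vtrip / DAC_VREF * ((1 << DAC_BITS) - 1);
    if (code > (1 << DAC_BITS) - 1)
        code = (1 << DAC_BITS) - 1;
    if (code < 0)
        code = 0;
    return (uint16_t)(code + 0.5);
}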

A module in the FPGA controls the output enables based on the overcurrent flags and internal state. When an output is first turned on the overcurrent flag is ignored for a programmable delay (usually a few ms) in order to avoid false triggering from inrush spikes. After this period, if the overcurrent flag is ever asserted the channel is turned off and placed in the "error-disable" state. In order to clear an error condition the channel must be manually cycled, much like a conventional circuit breaker.
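
The real logic is in the FPGA, but the behaviour described above maps onto a small per-channel state machine; a C model of it (names and the tick rate are mine, not the actual HDL) might look like:

enum chan_state { CHAN_OFF, CHAN_INRUSH, CHAN_ON, CHAN_ERRDISABLE };

struct channel {
    enum chan_state state;
    unsigned inrush_ticks;   /* programmable inrush blanking period */
    unsigned timer;
};

/* Called once per control tick with the latest overcurrent comparator flag. */
void channel_tick(struct channel *c, int enable_request, int overcurrent)
{
    switch (c->state) {
    case CHAN_OFF:
        if (enable_request) {
            c->timer = c->inrush_ticks;
            c->state = CHAN_INRUSH;          /* overcurrent ignored here */
        }
        break;
    case CHAN_INRUSH:
        if (!enable_request)
            c->state = CHAN_OFF;
        else if (c->timer == 0)
            c->state = CHAN_ON;
        else
            c->timer--;
        break;
    case CHAN_ON:
        if (overcurrent)
            c->state = CHAN_ERRDISABLE;      /* trip: latched off */
        else if (!enable_request)
            c->state = CHAN_OFF;
        break;
    case CHAN_ERRDISABLE:
        if (!enable_request)                 /* manual cycle clears the error */
            c->state = CHAN_OFF;
        break;
    }
}

/* The MOSFET gate drive simply follows the state. */
int channel_output_on(const struct channel *c)
{
    return c->state == CHAN_INRUSH || c->state == CHAN_ON;
}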

Here's a view of the finished first-run prototype. As you can see the first layout revision had a few bugs ;) The dead-bugged oscillator turned out to not be necessary but it would have been more work to remove it so I'm keeping it until I do a respin with all of these fixes incorporated.
PDU board on my desk
The SNMP interface and IP protocol stack runs on a custom softcore CPU of my own design. The CPU is named GRAFTON, in keeping with my tradition of naming my processors after nearby towns. It is fairly similar to MIPS-1 at the ISA level and can be targeted by mips-linux-gnu gcc with carefully chosen flags, but does not implement unaligned load/store, interrupts, or the normal coprocessors. Coprocessor 0 exists but is used to interface with the RPC network.

GRAFTON's programming model is largely event-driven, in a model that will be somewhat familiar to anyone who has done raw Windows API programming. The CPU sleeps until an RPC interrupt packet shows up, then it is processed and it goes back to sleep. Unlike classical interrupt handling, user code running on GRAFTON cannot be pre-empted by an interrupt; it just sits in the queue until retrieved.

int main()
{
    //Do one-time setup
    Initialize();

    //Main message loop
    RPCMessage_t rmsg;
    while(1)
    {
        GetRPCInterrupt(&rmsg);
        ProcessInterrupt(&rmsg);
    }

    return 0;
}

RPCFunctionCall(), a simple C wrapper around the low-level SendRPCMessage() and RecvRPCMessage() functions, abstracts the RPC network with blocking C function-call semantics. Any messages other than return values of the pending call are queued for future processing.

In the example below, I'm initializing the SPI modules for the A/D converters with a clock divisor computed on the fly from the system clock rate.

void ADCInitialize()
{
    //SPI clock = 250 KHz
    RPCMessage_t rmsg;
    RPCFunctionCall(g_sysinfoAddr, SYSINFO_GET_CYCFREQ, 0, 250 * 1000, 0, &rmsg);
    int spiclk = rmsg.data[1];
    for(unsigned int i=0; i<3; i++)
        RPCFunctionCall(g_periphAddr, PERIPH_SPI_SET_CLKDIV, spiclk, i, 0, &rmsg);
}

The firmware is about 4300 lines of C in total, including comments but not the 1165 lines of C and assembly in my C runtime library shared by all GRAFTON designs. It implements IPv4, UDP, DHCP, ARP, ICMP echo, and SNMPv2c. SNMPv3 security and IPv6 are planned but are on hold until I move firmware out of block RAM and into flash so I have some space to work in. Other than that, it's essentially feature-complete and I've been using the PDU in my lab for a while while working on my flash controller and some support stuff.

The PC-side UI, intended to control several PDUs, is written in C++ using gtkmm and communicates with the board over SNMP. One tab (not shown) contains summary information with one graph trace per PDU.

PDU control panel
With a few minutes of PHP scripting I was also able to get my Munin installation to connect to the PDU and collect long-term logs even when I don't have the panel up.

Munin logs of PDU
The board runs quite cool; the spikes of heat caused by my furnace kicking in are clearly visible and dwarf the thermal variations caused by changes in load.

It needs a little bit more work to be fully production-ready but is already saving me time around the lab.

My desk with the PDU installed
Here's a look at my desk after deploying the PDU. The power cable mess is almost completely gone :) I do need to tidy up the Ethernet cables at some point, though...

by Andrew Zonenberg (noreply@blogger.com) at November 02, 2013 01:28 AM

November 01, 2013

Richard Hughes, ColorHug

GNOME Shell and GNOME Software

The ever-awesome Matthias Clasen added a nice feature to GNOME Software a couple of weeks ago:
gnome-software-shell-search
It’ll be available in GNOME 3.12 in a few months time.

by hughsie at November 01, 2013 03:32 PM

Upstream adoption of AppData so far

By popular request, some update on the upstream adoption of AppData so far:

Applications in Fedora with long descriptions: 168 (9%)
Applications in Fedora with screenshots: 140 (7%)
Applications in GNOME with AppData: 60 (50%)
Applications in KDE with AppData: 1 (1%)
Applications in XFCE with AppData: 0 (0%)

You can look at this in a few ways:

  • We’ve made significant progress in the last year-or-so and many popular applications are already shipping the extra data.
  • There are a lot of situations where the upstream authors do not know what an AppData file is, don’t have time to add one, or simply do not care.
  • GNOME is clearly ahead of KDE and XFCE, probably because of the existing GNOME Goal and my nag emails to the desktop-devel mailing list. A little thing to bear in mind is that Apper (the KDE application installer) can also make use of the AppStream data, so this is a little disappointing for KDE users who probably don’t see any difference at the moment.

So where do we go from here? Clearly KDE and XFCE have some catching up to do, and I need someone familiar with those communities to lead this effort. There is also a huge number of upstreams that need a little push in the right direction, and I’ve been trying to do that for the last couple of months. Without help, this would be a never-ending battle for me. A little reminder: In GNOME 3.12 we are penalising applications that don’t ship AppData by including them lower in the search results, and in GNOME 3.14 we’re not going to be showing them at all.

If you’re interested to see all the applications shown by default in Fedora 20, I’ve put together this page showing a quick overview. If you see anything there that shouldn’t be an application and needs blacklisting, just let me know. If you see an application you care about without a long description or screenshots, then please file a bug upstream pointing them at the AppData specification page. Thanks.

by hughsie at November 01, 2013 10:05 AM

October 31, 2013

Bunnie Studios

Name that Ware, October 2013

The Ware for October 2013 is shown below.

This month’s objective is to identify what piece of equipment this is a part of, rather than the identity of the specific sub-components shown in the photo.

by bunnie at October 31, 2013 12:10 PM

Winner, Name that Ware September 2013

September’s ware was guessed in a flash. I didn’t expect it would be so easy — I had posted just one side of the controller board originally (the other images were added after the correct guesses started rolling in). It seems many folks have seen the Leap Motion controller’s guts already. Plum33 was the correct first guess, email me for your prize!

by bunnie at October 31, 2013 12:10 PM

October 30, 2013

Bunnie Studios

Qué romántico!

Nothing says “I love you” quite like a fake ON-semi 16-pin SOIC.

Mitch Davis sent me this photo, posted in a Chinese-only trade group chat room for chip sellers in Huaqiangbei. The poster said, “Does anyone know who supplies this chip, my customer needs it urgently!”

I figure if you can put any fake markings on any chip, this would be a romantic way to give a sly wink to that girl in the material quality inspection office you’ve had eyes on. Now all we need are “will you marry me” chips: “Hey darling, can you help me rework this board? I can’t quite make out the part number on this chip…” Now the hard part is, what chip would be most appropriate for the big question?

by bunnie at October 30, 2013 06:48 AM

Andrew Zonenberg, Silicon Exposed

Desktop raised floor

It's been a while since I've posted about a project I've done rather than a tool or some of my reversing work. This one is purely mechanical too!

First, a little background. I have a lot of FPGA/CPLD/MCU dev boards on my desk. By "a lot" I don't mean two or three... more like 20. Powering this much hardware presents some interesting problems. I don't have that many USB ports (and many of them need more power than USB can provide). Wallwarts are another obvious solution, but I don't have enough outlets or wallwarts to power 20 boards either!

I made three bar-shaped USB hubs with male mini-B ports, to plug into small development boards backplane-style. This helped a bit, but as my collection of boards grew the situation got worse.

By last May, my desk looked something like this:

My desk full of cables
Despite extensive efforts to manage the cable disaster with split tubing, there was still a giant octopus. Worse yet, my power strips were full and half of my boards didn't even have power.

The first step was to replace the loose boards with a datacenter-style "raised floor". I bought a 2x3 foot sheet of clear blue acrylic from McMaster-Carr, carefully floorplanned where all of the boards would go, and then drilled holes for each board's mounting standoffs.

Drilling holes
This operation had to be done out on the kitchen table because my office was too small to work comfortably in.

Mounting USB hubs
I mounted all of the USB hubs to the underside of the board in order to save space on top for dev boards and things I was likely to need to probe. While this seemed a good idea at first, reaching underneath them to run cables was a little tricky. After finishing the build I replaced the legs with ones several inches longer to provide the necessary hand clearance.

Before running cables, I attached all of the boards and brought it back to my desk to test the fit.

The apparatus on my desk
The "hostnames" on labels below each board are used as node names for my batch scheduler and unit test framework (more on that in a future post). In addition, those boards with Ethernet interfaces are assigned a constant IP address by my DHCP server, recorded in DNS with that hostname so I can write test cases using hostnames instead of raw IP addresses.

In an effort to reduce cable mess, I made custom cut-to-size USB cables out of cat5 cable and soldered on USB plugs. This was a very slow and laborious process because the connectors tended to melt very easily no matter what temperature I ran the iron at. BGA is no problem for me but these connectors gave me a hard time; I had yields somewhere around 60-70% even after rework. The rest of the time the connectors were melted beyond repair.

Despite the pain, I think the results were worth it. I was a little worried about signal quality as USB is supposed to be 90 ohm Zdiff and cat5e is 100, but I've noticed no problems. I did try to find 90 ohm cables but had trouble locating any.

Custom USB cables
After running all of the cables I could, a few of the boards were still unpowered and there were wallwarts everywhere, but the data wiring was a bit neater. Definitely a step in the right direction, but more work was needed.
After initial deployment

After taking that picture, I replaced most of the red electrical tape with zip ties and stick-on mount points. This made the setup a lot neater but I don't have any photos of that handy.

In order to tidy it up properly, I needed to tackle the power problem. My solution to that is a bit of a long story so I'll save that for next post :)

by Andrew Zonenberg (noreply@blogger.com) at October 30, 2013 04:11 AM

October 29, 2013

Elphel

Quadrotor copter with machine vision for contest

This page gives a brief overview of a multirotor UAV platform called “Tau”, which was built specially for participating in the flying robots contest established by the Russian Croc company. For now the contest has only Russian participants, probably because it was held for the first time.

4

Our team name was “Autonomous aerospace”. We are from Krasnoyarsk, a city of one million people in Siberia. We had experience in UAV airplane development and manufacturing, and have grown from a student and postgraduate university (SFU) scientific team into a startup company.

In building the contest machine we were not looking for the easiest implementation. Among our goals were further developing our autopilot and gaining experience integrating real-time machine vision into the control loop.

During contest preparation we dealt with a multirotor platform for the first time; previously we only had airplane autopiloting experience. Adapting the autopilot for a quadrotor was not as straightforward as we expected, but we succeeded. We can now proudly say that we built the first quadrotor which calculates all the navigation and control math under the QNX real-time operating system :) . At least, no one has done any crazy stuff like this before :)

Mission

The mission is to take off from the start marker, follow a simple maze toward the finish marker, touch down within its contour and then fly back, landing on the start marker and cutting off the engines. On the path to the target a random barrier is set. It can be moved by the organizers across the wall, and the gate might be aligned at the left, at the right or anywhere between the walls.

p1.2_en

The drone is allowed to touch the walls, but not allowed to touch the ground.

On-board UAV control system

tau_en

Computers

The central control unit is the AP-05 autopilot (AP). It has an embedded inertial navigation system (INS), an air data system (ADS), and a global navigation satellite system GLONASS/GPS (GNSS) receiver. The computer in the AP-05 is an ARM9-family processor with a 400MHz clock frequency and 64 megabytes of RAM, running under the QNX Neutrino real-time operating system (RTOS). QNX is used under an academic licence. A major point is the implementation of the navigation and control loop under QNX as separate processes: fnav for navigation, fcont for control. The loop frequency is 200 Hz.

Decisions for flying the contest maze are made in the autopilot by setting input values for the roll, pitch and yaw PID regulators.
The machine vision computer (MVC) is an i.MX6Q SABRE Lite board with 4 Cortex-A9 processors. To explore QNX technologies, machine vision is also computed under QNX.
The connection between the AP and the MVC is made over Ethernet via the native qnet protocol.
For the programmer this gives transparency and flexibility: all interprocess communication is Unix-like, locally or remotely, via QNX messages. Local communication is handled by the kernel, remote by the kernel plus qnet.

 

Sensors

SRF08 ultrasonic rangefinders are used as proximity sensors. They are mounted on the bumper, one each for the front, rear, left and right sides. The same sensor type is used for altimetry. The sensors are connected to the i.MX6Q SABRE Lite (MVC) via I2C, on the same bus with different addresses. Running the altitude and wall navigation control loop over such a long path looks weird, but the AP doesn’t have an external I2C port due to its noise vulnerability. The process which polls the rangefinders exposes the data to the system through the /dev/fsrf resource manager. The autopilot reads this data over the qnet stack as the file /net/mvc/dev/fsrf. After being read by the navigation process, the range data is filtered and then used as feedback for the altitude control and wall avoidance algorithms.
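
Thanks to qnet this looks like an ordinary file read on the autopilot side; a sketch (the payload layout is an assumption) could be:

/* Read the rangefinder data exposed by the MVC's /dev/fsrf resource manager,
 * reached over qnet as /net/mvc/dev/fsrf. Payload layout is assumed. */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

struct fsrf_sample {
    uint16_t range_cm[5];   /* assumed: front, rear, left, right, down */
};

int read_rangefinders(struct fsrf_sample *s)
{
    int fd = open("/net/mvc/dev/fsrf", O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, s, sizeof(*s));
    close(fd);
    return n == (ssize_t)sizeof(*s) ? 0 : -1;
}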

When we were looking for a camera, the main problem was making a software interface for OpenCV in QNX. Porting a USB webcam interface to QNX in a short time seemed impossible because of our lack of knowledge in that field.
That’s why the camera search was narrowed to IP cameras only. Finally the Elphel NC353L was found. It has several software interfaces for images: MJPEG over RTSP and HTTP. The camera is open source, so it seemed a guaranteed way to make our own low-level protocol and image pre-processing.

The camera also has many configuration parameters for optimizing the real-time picture, and the sensor has a higher resolution than other cameras in the same price segment.
Knowing the camera is open source, we estimated our chances of missing an appropriate solution as very low, and this estimation was correct =).
The machine vision algorithm is computed by a process called fmv, and its discrete results are exposed through the /dev/fmv resource manager.

 

Machine vision

Start finish markers search

Searching for the start/finish points is done by comparing the colour histograms of the current image with histograms of reference images. Histograms for the B, R and G channels are compared individually, and then an integral weighted estimate of similarity is calculated. Similarity is calculated separately for the start and finish markers.
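
A generic sketch of this kind of per-channel comparison (not the team's actual fmv code) is a normalized correlation per histogram, combined with per-channel weights:

#include <math.h>

#define BINS 64

/* Normalized correlation between two histograms: 1.0 means identical shape. */
static double hist_correlation(const double a[BINS], const double b[BINS])
{
    double ma = 0, mb = 0;
    for (int i = 0; i < BINS; i++) { ma += a[i]; mb += b[i]; }
    ma /= BINS; mb /= BINS;

    double num = 0, da = 0, db = 0;
    for (int i = 0; i < BINS; i++) {
        num += (a[i] - ma) * (b[i] - mb);
        da  += (a[i] - ma) * (a[i] - ma);
        db  += (b[i] - mb) * (b[i] - mb);
    }
    return num / sqrt(da * db + 1e-12);
}

/* Weighted similarity over the three colour-channel histograms of the
 * current frame versus a reference marker image. */
double marker_similarity(const double cur[3][BINS], const double ref[3][BINS],
                         const double weight[3])
{
    double s = 0;
    for (int c = 0; c < 3; c++)
        s += weight[c] * hist_correlation(cur[c], ref[c]);
    return s;
}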

Stereo vision

For the barrier gate entrance we initially decided to implement stereo vision algorithms to determine its position. At the beginning of contest preparations the width between the walls on the final approach to the finish marker was supposed to be 20 meters, and it seemed challenging to find a 3m-wide gate in that span. That’s why we decided to integrate the Elphel NC353L solution: this version has a multiplexer board which simultaneously gathers both sensors’ data into a single image. The stereo camera was generously provided to us by the Elphel company to participate in the contest.

We had previously tested the semi-global block matching (SGBM) algorithm. The method gives a disparity map from two images. Using SGBM requires distortion remapping and alignment preprocessing of the input images. Using the cameras’ intrinsic parameter matrices we rectified the images, so that each row of the left image coincides with the corresponding row of the right image. We experimentally tuned the scene parameters and looked for an optimal disparity map. The disparity map has the same dimensions as the input images, but consists of 16-bit values. Looking at a single row in the middle of the image, selected by the INS to fit the horizon, we recovered the distance to near objects and from that expected to locate the gate.
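
Once the disparity d for a pixel in that row is known, the distance follows from Z = f·B/d for rectified cameras; a small sketch with assumed focal length, baseline and fixed-point scaling:

/* Distance from disparity for a rectified stereo pair. The focal length,
 * baseline and the 1/16-pixel fixed-point scaling are assumptions. */
double depth_from_disparity(int disp_fixed)
{
    const double focal_px   = 700.0;   /* assumed focal length, pixels */
    const double baseline_m = 0.10;    /* assumed stereo baseline, metres */
    double d = disp_fixed / 16.0;      /* assumed fixed-point disparity */
    if (d <= 0)
        return -1.0;                   /* no match for this pixel */
    return focal_px * baseline_m / d;
}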

 

Multicopter UAV Tau frame design

Starting from the design…

To fit all the required devices compactly we decided to make a central frame with 3 levels. Each level is a milled carbon fiber plate.  1
 2 The level plates are fitted together with aluminium spacers. Between the first and second levels there are carbon beams that are clamped between aluminium fittings.
At the end of each beam a motor is mounted using aluminium brackets. The motors drive 12″ x 4.5 propellers. 3
4 For the protection of the propellers and equipment a special bumper was made. Four parts form a closed perimeter. Each bumper part has a U-like cross-section and is made of a 3-layer carbon composite sandwich. The bumper is mounted with an L-shaped bracket fixed at the bottom of each motor mount.
After the design process, production and assembly started. First the carbon fiber plates and beams were baked. In parallel, all the aluminium parts were milled. The prepared plates were then milled on a CNC machine, and molds for the bumper and brackets were milled as well. 5

After all that, assembly started!
In five days we fit everything together and wired up all the devices.
The airframe design in STEP format is freely available: with all equipment and as a plain frame.

 6  7  8

 

Flight testing 

When assembly was done, 10 days were left before the contest began. We actually had a flight test platform before, so we did not start the flight software from scratch.

Previous results had been obtained on a strong fiberglass frame. Some explanations are given in Russian in the following videos:

After assembling the contest drone we spent 5 days making it fly properly: maintaining attitude and regulating the distance from the walls.

We spent the next five days testing the whole mission algorithm in combination with machine vision and real markers. We got some successful complete tests, but the whole system was very unstable. Most of the problems were about flying. A lot of time was eaten by I2C ranger problems: the high motor currents and vibration made contacts and the ground potential unstable, which led to the bus getting stuck. When the bus got stuck, the altimeter also got stuck, which led to the engines turning off. Many thanks to our designers and the whole mechanical shop: in dozens of falls we only once broke a bumper bracket, and one leg.

The algorithm for maze flying is classical: keep right, keep distance from the walls and pray :) . We do not make turns; the UAV maintains the yaw set at initial alignment, and at the start it is aligned with its rear side toward the direction of flight. So it begins by flying backwards, then left, then forwards, and on the flight back – in reverse.

Flying forwards means holding a distance from the front wall. When the wall is far away, the front ranger saturates at its maximum value, so the regulator moves the drone forward by tilting its pitch forwards.
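
In other words, the forward motion falls out of a simple proportional term on the front rangefinder; a sketch with made-up gains and limits:

/* Hold a set distance from the front wall: proportional pitch command.
 * Gains, limits and units are illustrative only. */
double pitch_setpoint_deg(double front_range_m)
{
    const double hold_dist_m = 1.5;   /* desired distance from the wall */
    const double kp = 8.0;            /* degrees of pitch per metre of error */
    const double limit = 10.0;        /* clamp to +/-10 degrees */

    /* Far from the wall the ranger saturates high, the error stays large
     * and the drone keeps pitching forward; near the wall it levels off. */
    double cmd = kp * (front_range_m - hold_dist_m);
    if (cmd > limit)  cmd = limit;
    if (cmd < -limit) cmd = -limit;
    return cmd;
}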

 

Contest video

In the real contest (the sizes were officially corrected) the distance between the final approach walls became 5 meters, so finding the gate was not such a big problem anymore. Barrier detection was therefore done in the autopilot by a finite state machine. If the front stereo camera (with one of its eyes) has seen an ellipse in front of it, that means we have passed the gate and must soon see the marker with the downward-looking camera. If not, we are probably still holding distance from the barrier wall and must move left.

First attempt 

It failed because of an improper finite state machine criterion for barrier avoidance. The drone thought that it had reached the barrier, and in the next cycle it thought it had reached the front wall at the marker, didn’t find any markers and turned back.

 

Second attempt

Here our machine vision algorithm failed. The camera didn’t recognize the landing marker, so the drone tried to find it on the way back, and that was a dead end of the algorithm.
As always, it was just a question of two more days of debugging to make everything right :)

 

Conclusion

We did not completely succeed, but we did not fail either.
Our team dramatically improved its existing software and developed a new direction – machine vision.
It was a great teamwork experience that charged our team to handle further challenges.

 

Update 30.10.2013:

While this text was being posted, a new contest was announced for 2014. We are going to create a new team of students only to do the new contest mission with the already-prepared machine. Now we have a chance to get our initial ideas realized.

by flight-machine at October 29, 2013 03:54 PM

ZeptoBARS

KR1858VM3 - last soviet Z80 : weekend die-shot

The KR1858VM3 is the last Soviet Z80. This part was manufactured at the Belorussian "Transistor" fab in 1995.

While previous Soviet Z80s were NMOS, this one is 2µm CMOS. But due to the "relaxed" layout (in addition to the intrinsically lower logic density of CMOS) the die size is even larger than that of the 4µm NMOS variant KR1858VM1.

Die size 5050x4657 µm.

October 29, 2013 10:28 AM

October 28, 2013

Altus Metrum

keithp&#x27;s rocket blog: Quaternions

Tracking Orientation with Quaternions

I spent the flight back from China and the weekend adding orientation tracking to AltOS. I’d done a bit of research over the last year or so working out the technique, but there’s always a big step between reading about something and actually doing it. I know there are a pile of quaternion articles on the net, but I wanted to write down precisely what I did, mostly as a reminder to myself in the future when I need to go fix the code…

Quaternion Basics

Quaternions were invented by Sir William Rowan Hamilton around 1843. It seems to have started off as a purely theoretical piece of math, extending complex numbers from two dimensions to four by introducing two more roots of -1 and defining them to follow:

i² = j² = k² = ijk = -1

Use these new roots to create numbers with four real components, three of which are multiplied by our three roots:

r + ix + jy + kz

With a bit of algebra, you can figure out how to add and multiply these composite values, using the above definition to reduce and combine terms so that you end up with a set which is closed under the usual operations.

Then we add a few more definitions, like the conjugate:

q = (r + ix + jy + kz)
q* = (r - ix - jy - kz)

The norm:

| q | = √(qq*) = √(r² + x² + y² + z²)

‘u’ is a unit quaternion if its norm is one:

| u | = 1
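
To make the algebra concrete, a minimal quaternion type with multiplication, conjugate and norm might look like this in C++ (an illustrative sketch, not the AltOS code):

#include <cmath>

// Quaternion r + ix + jy + kz
struct quat { double r, x, y, z; };

// Hamilton product: expand (r1 + ix1 + jy1 + kz1)(r2 + ix2 + jy2 + kz2) and
// reduce the terms with i² = j² = k² = ijk = -1.
quat mul(quat a, quat b)
{
    return {
        a.r*b.r - a.x*b.x - a.y*b.y - a.z*b.z,
        a.r*b.x + a.x*b.r + a.y*b.z - a.z*b.y,
        a.r*b.y - a.x*b.z + a.y*b.r + a.z*b.x,
        a.r*b.z + a.x*b.y - a.y*b.x + a.z*b.r
    };
}

quat conjugate(quat q) { return { q.r, -q.x, -q.y, -q.z }; }

double norm(quat q) { return std::sqrt(q.r*q.r + q.x*q.x + q.y*q.y + q.z*q.z); }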

Quaternions and Rotation

Ok, so we’ve got a cute little 4-dimensional algebra. How does this help with our rotation problem? Let’s figure out how to rotate a point in space by an arbitrary rotation, defined by an axis of rotation and an amount in radians.

First, take a vector, ‘v’, and construct a quaternion, ‘q’ as follows:

q = 0 + ivx + jvy + kvz

Now, take a unit quaternion ‘u’, which represents a vector in the above form along the axis of rotation, and a rotation amount, ω, and construct a quaternion ‘r’ as follows:

r = cos ω/2 + u sin ω/2

With a pile of algebra, you can show that the rotation of ‘q’ by ‘r’ is:

q° = r q r*

In addition, if you have two rotations, ‘s’ and ‘r’, then the composite rotation, ‘t’, a rotation by ‘r’ followed by ‘s’ can be computed with:

q°° = s (r q r*) s*

    = (sr) q (r*s*)

    = (sr) q (sr)*

t   = s r

q°° = t q t*

That’s a whole lot simpler than carrying around a 3x3 matrix to do the rotation, which makes sense as a matrix representation of a rotation has a bunch of redundant information, and it avoids a pile of problems if you try to represent the motion as three separate axial rotations performed in sequence.

Computing an initial rotation

Ok, so the rocket is sitting on the pad, and it’s tilted slightly. I need to compute the initial rotation quaternion based on the accelerometer readings which provide a vector, ‘g’ pointing up. Essentially, I want to compute the rotation that would take ‘g’ and make it point straight down. Construct a vector ‘v’, which does point straight up:

g = (0, ax, ay, az) / norm(0, ax, ay, az)
v = (0, 0, 0, 1)

G is ‘normalized’ so that it is also a unit vector. The cross product between g and v will be a vector normal to both, which is the axis of rotation. As both g and v are unit vectors, the length of their cross product will be sin ω

a = g × v

  = u sin ω

The cosine of the angle between g and v is the dot product of the two vectors divided by the product of their lengths. As both g and v are unit vectors, that product is one, so we have

cos ω = g · v

For our quaternion, we need cos ω/2 and sin ω/2 which we can get from the half-angle formulae:

cos ω/2 = √((1 + cos ω)/2)
sin ω/2 = √((1 - cos ω)/2)

Now we construct our quaternion, factoring sin ω back out of ‘a’ (recall a = u sin ω):

q = cos ω/2 + (a / sin ω) sin ω/2
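
Written out as code, that construction looks roughly like this (using the quat type from the sketch above; the degenerate already-vertical case just returns the identity rotation):

// Build the initial rotation quaternion from a raw accelerometer reading (ax, ay, az).
// Illustrative only; not the AltOS implementation.
quat initial_rotation(double ax, double ay, double az)
{
    double len = std::sqrt(ax*ax + ay*ay + az*az);
    double gx = ax/len, gy = ay/len, gz = az/len;   // g: measured "up", normalized

    // v = (0, 0, 1); axis a = g × v, |a| = sin ω, cos ω = g · v
    double a_x = gy, a_y = -gx, a_z = 0.0;
    double cos_w = gz;
    double sin_w = std::sqrt(a_x*a_x + a_y*a_y + a_z*a_z);

    if (sin_w < 1e-9)                               // already vertical: no rotation needed
        return { 1, 0, 0, 0 };

    double cos_hw = std::sqrt((1.0 + cos_w) / 2.0); // half-angle formulae
    double sin_hw = std::sqrt((1.0 - cos_w) / 2.0);

    double s = sin_hw / sin_w;                      // factor sin ω back out of a
    return { cos_hw, a_x*s, a_y*s, a_z*s };
}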

Updating the rotation based on gyro readings

The gyro sensor reports the rate of rotation along all three axes. To compute the change in rotation, we take the instantaneous sensor value, multiply it by the time since the last reading and divide by two (because we want half angles for our quaternions). With the three half angles, (x,y,z), we can compute a composite rotation quaternion:

   cos x cos y cos z + sin x sin y sin z +
i (sin x cos y cos z - cos x sin y sin z) +
j (cos x sin y cos z + sin x cos y sin z) +
k (cos x cos y sin z - sin x sin y cos z)

Now we combine this with the previous rotation to construct our current rotation.
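
In code, one integration step might look like this (same illustrative quat type; gyro rates in radians per second):

// Combine one gyro reading (rates wx, wy, wz over dt seconds) into the running
// orientation. The composition order follows the t = s r rule above; the real
// firmware may order the factors differently depending on frame conventions.
quat update_orientation(quat current, double wx, double wy, double wz, double dt)
{
    double x = wx * dt / 2.0;   // half angles, as described above
    double y = wy * dt / 2.0;
    double z = wz * dt / 2.0;

    quat delta = {
        std::cos(x)*std::cos(y)*std::cos(z) + std::sin(x)*std::sin(y)*std::sin(z),
        std::sin(x)*std::cos(y)*std::cos(z) - std::cos(x)*std::sin(y)*std::sin(z),
        std::cos(x)*std::sin(y)*std::cos(z) + std::sin(x)*std::cos(y)*std::sin(z),
        std::cos(x)*std::cos(y)*std::sin(z) - std::sin(x)*std::sin(y)*std::cos(z)
    };

    return mul(delta, current);
}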

Doing this faster

If we read our sensor fast enough that the angles were a small fraction of a radian, then we could take advantage of this approximation:

sin x ≃ x
cos x ≃ 1

that simplifies the above computation considerably:

1 + xyz + i (x - yz) + j (y + xz) + k (z - xy)

And, as x, y, z ≪ 1, we can further simplify by dropping the quadratic and cubic elements as insignificant:

1 + ix + jy + kz

This works at our 100Hz sampling rate when the rotation rates are modest, but quick motions will introduce a bunch of error. Given that we’ve got plenty of CPU for this task, there’s no reason to use this simpler model. If we did crank up the sensor rate a bunch, we might reconsider.
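
For reference, the small-angle version collapses the update to almost nothing (same illustrative types as above):

// Small-angle variant: valid only when x, y, z are tiny, i.e. a high sample rate
// and modest rotation rates. Illustrative sketch only.
quat update_orientation_fast(quat current, double wx, double wy, double wz, double dt)
{
    quat delta = { 1.0, wx*dt/2.0, wy*dt/2.0, wz*dt/2.0 };
    return mul(delta, current);
}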

Computing the Current Orientation

We have a rotation quaternion which maps the flight frame back to the ground frame. To compute the angle from vertical, we simply take a vector in flight frame along the path of flight (0, 0, 0, 1) and rotate that back to the ground frame:

g = r (0 0 0 1) r*

That will be a unit vector in ground frame pointing along the axis of the rocket. The arc-cosine of the Z element will be the angle from vertical.
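
In code, that last step is just a rotation of the flight-frame z axis followed by an arc-cosine (same illustrative quat helpers as above):

// Angle from vertical, in radians: rotate the flight-frame (0, 0, 1) vector into the
// ground frame with g = r q r*, then take the arc-cosine of its Z component.
double tilt_angle(quat r)
{
    quat q = { 0, 0, 0, 1 };                    // flight-frame "up" as a pure quaternion
    quat g = mul(mul(r, q), conjugate(r));      // g = r q r*
    double z = g.z;
    if (z > 1.0) z = 1.0;                       // guard against rounding error
    if (z < -1.0) z = -1.0;
    return std::acos(z);
}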

Results

All of the above code is checked into the AltOS git repository

I added a test mode to the firmware that just dumps out the current orientation over the USB link which lets you play with rotating the board to see how well the system tracks the current orientation. There’s a bit of gyro drift, as you’d expect, but overall, the system tracks the current orientation within less than a tenth of a degree per second.

Even with all of this computation added, the whole flight software is consuming less than 7% of the STM32L CPU time.

October 28, 2013 10:57 PM

October 25, 2013

FreakLabs

Freakduino Long Range Wireless Board WalkThrough - Part 2

In the first part of the walkthrough (index.php/Tutorials/Software/Freakduino-Wireless-Board-WalkThrough-Basic-Usage.html), we learned some basic operations and hello world type programs to get the 900 LR board up and running. In this walkthrough, we’ll be building on what we learned previously and moving on to more advanced topics like radio configuration and power management.

Adding Commands to the Command Line

In the last section of the walkthrough, part 1, I introduced the cmdArduino command line library (index.php/Tutorials/Software/Tutorial-Using-CmdArduino.html). It allows you to make sketches interactive from...

October 25, 2013 03:29 AM

October 16, 2013

ZeptoBARS

Samsung SuperAMOLED : weekend die-shot

Samsung's SuperAMOLED display from the Galaxy S4 mini is supposed to have an active matrix (i.e. control transistors on the substrate) and an integrated touch sensor. Let's take a look:
It seems there are at least 2 levels of barely visible interconnect (ITO?).


With a few pixels glowing:


Only pixels glowing:


Half-pitch and thinnest lines are 2.5 µm. Diagonal die size is 109 mm :-)

October 16, 2013 08:32 PM

Richard Hughes, ColorHug

How to generate AppStream metadata for Fedora

I’m generating all the Fedora AppStream metadata by hand at the moment. Long term this is going to move to koji, but since we’re still tweaking the generator, adding features and fixing bugs it seems too early to fully integrate things. This is fine if you just care about the official Fedora sources, but a lot of people want to use applications from other less, ahem, free repos.

If you manage a repository and want to generate AppStream metadata yourself, it’s really quite easy if you follow these instructions, although building the metadata can take a long time. Let’s assume you run a site called MegaRpms and you want to target Fedora 20.

First, checkout the latest version of fedora-appstream and create somewhere we can store all the temporary files. You’ll want to do this on a SSD if possible.

$ mkdir megarpms
$ cd megarpms

Then create a project file with all the right settings for your repo. Let’s assume you have two separate trees, ‘megarpms’ and ‘megarpms-updates’.

$ cat project.conf
[AppstreamProject]
DistroTag=f20
RepoIds=megarpms,megarpms-updates
DistroName=megarpms-20
ScreenshotMirrorUrl=http://www.megarpms.org/screenshots/

The screenshot mirror URL is required if you want to be able to host screenshots for applications. If you don’t want to (or can’t afford the hosting costs) then you can comment this out and no screenshots will be generated.

Then we can actually download the packages we need to extract. Ensure that both megarpms and megarpms-updates are enabled in /etc/yum.repos.d/ and then start downloading:

$ sudo ../fedora-download-cache.py

This requires root as it uses and updates the system metadata to avoid duplicating the caches you’ve probably already got. After all the interesting packages are downloaded you can do:

$ ../fedora-build-all.py

Now, go and make a cup of tea and wait patiently if you have a lot of packages to process. After this is complete you can do:

$ ../fedora-compose.py

This spits out megarpms-20.xml.gz and megarpms-20-icons.tar.gz — and you now have two choices of what to do with these files. You can either upload them with the rest of the metadata you ship (e.g. in the same directory as repomd.xml and primary.sqlite.bz2), which will work with Fedora 21 and higher.
For Fedora 20, you instead have to actually install these files, so you can do something like this in the megarpms-release.spec file:

Source1: http://www.megarpms.org/temp/megarpms-20.xml.gz
Source2: http://www.megarpms.org/temp/megarpms-20-icons.tar.gz
mkdir -p %{buildroot}%{_datadir}/app-info/xmls
cp %{SOURCE1} %{buildroot}%{_datadir}/app-info/xmls
mkdir -p %{buildroot}%{_datadir}/app-info/icons/megarpms-20
cd %{buildroot}%{_datadir}/app-info/icons/megarpms-20
tar xvzf %{SOURCE2}
cd -

This ensures that gnome-software can access both data files when starting up. If you have any other questions, concerns or patches, please get in touch. This is all very Fedora specific (rpm files, Yum API, various hardcoded package names) but if you’re interested in using fedora-appstream on your distro and want to actually do the work I’d welcome patches to make it less fedora-centric. SUSE generates the AppStream files in a completely different way.

by hughsie at October 16, 2013 06:23 PM

OggStreamer

#oggstreamer – PayAsYouWish Campaign Round 2

DONE

Welcome to the 2nd round of our PayAsYouWish OpenHardware campaign.

If you want to obtain an OggStreamer, we can now provide 5 pcs. to interested institutions (including schools, hackerspaces, radios and media labs) and individual developers (who work on similar projects or have an idea for a contribution to this project). Just write a short email to georg <at> otelo.or.at and explain what you want to do with the OggStreamer. All you have to pay for is shipping, plus a donation of your choice to the “Open Technology Laboratory Vöcklabruck”. If there is more demand than 5 pcs., we will choose from all applicants. Entries are welcome until the 15th of October.

This time we got 5 applications … perfect :)




by oggstreamer at October 16, 2013 02:03 PM

October 15, 2013

Richard Hughes, ColorHug

PackageKit service packs and catalogs

Does anyone actually use the PackageKit service pack or catalog functionality? If there are no users I’m intending to rip out the unused and unloved features from GNOME 3.12. Please say now, or forever hold your peace. Thanks.

by hughsie at October 15, 2013 01:08 PM

October 14, 2013

Andrew Zonenberg, Silicon Exposed

SoC framework, part 1: NoC overview and layer 1 structure

Those of you who have read my older posts may remember that I am currently pursuing a PhD in computer science at RPI. My research focus is the intersection of computer architecture and security, blurring classical distinctions between components in hopes of solving open problems in security. I'd go into more detail but I have to keep some surprises for my published papers ;)

As part of my research I am developing an FPGA-based SoC to test my theories. Existing frameworks and buses, such as AXI and Wishbone, lacked the flexibility I required so I had to create my own.

The first step was to forgo the classic shared-bus or crossbar topology in favor of a packet-switched network-on-chip (NoC). In order to keep the routing simple I elected to use a quadtree topology, with 16-bit routing addresses, for the network. This maps well to a spatially distributed system and should permit scaling to very large SoCs (up to 65536 IP cores per SoC are theoretically possible, though FPGA gate counts limit feasible designs to something much smaller).

Example quadtree (from http://www.eecs.berkeley.edu/)
For the remainder of this post series I will use a slightly modified form of CIDR notation, as used with IP subnetting, to describe NoC addresses. For example, "8000/14" is the subnet with routing prefix 1000 0000 0000 00, consisting of hexadecimal addresses 0x8000, 0x8001, 0x8002, and 0x8003. (Unlike IPv4 addressing, all addresses in the NoC are usable by hosts; there are no reserved broadcast addresses since all traffic is point to point.)

Each router has four downstream ports and one upstream port. When a packet arrives at a router, the router checks whether the packet is intended for its subnet; if so, the next two bits control which downstream port it is forwarded out of. If the packet belongs to another subnet, it is sent out the upstream port.

Example NoC routing topology
As an example, if the host at 0x8001 wanted to send a message to the host at 0x8003, it would first reach the router for the 0x8000/14 subnet. The router checks the prefix, determines it to be a match, and then reads address bits 1:0 to determine that the packet should go out port 2'b11.

If 0x8001 were instead communicating with 0x8005, the router would instead forward the message out the upstream port. The router at 0x8000/12 would check address bits 3:2, determine that the packet is destined for port 2'b01, and forward to the destination router, which would then use bits 1:0 as the selector and forward out port 2'b01 to the final destination.
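
In software terms, the routing decision boils down to something like the following C++ sketch (the real routers are Verilog; the function and variable names here are only for illustration):

#include <cstdint>

// Routing decision for a router that owns the given subnet. prefix_bits is the
// prefix length (e.g. 14 for 0x8000/14); the two bits just below the prefix select
// one of the four downstream ports, and anything outside the subnet goes upstream.
int route(uint16_t dest, uint16_t subnet, int prefix_bits)
{
    uint16_t mask = 0xFFFF << (16 - prefix_bits);
    if ((dest & mask) != (subnet & mask))
        return -1;                                   // not our subnet: forward upstream

    return (dest >> (16 - prefix_bits - 2)) & 0x3;   // downstream port 0..3
}

// Example: route(0x8003, 0x8000, 14) == 3 (port 2'b11), route(0x8005, 0x8000, 14) == -1.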

The actual network topology is slightly more complex than the diagram above implies, because my framework uses two independent networks, one for bulk data transfer and one for control-plane traffic. Thus, each line in the above diagram is actually four independent one-way links; two upstream and two downstream. Each link consists of a 32-bit data bus plus a few status bits. The actual protocol used will be described in the next post in this series.

by Andrew Zonenberg (noreply@blogger.com) at October 14, 2013 06:39 AM

SoC framework, part 2: layer 2/3 protocols

Introduction

This is the second post in a series on the SoC framework I'm developing for my research. I'm going to get into more interesting topics (such as my build/test framework and FPGA cluster) shortly, but to understand how all of the parts communicate it's necessary to understand the basics of the SoC interconnect.

I'm omitting some of the details of link-layer flow control and congestion handling for now as it's not necessary to understand the higher-level concepts. If anyone really wants to know the dirty details, comment and I'll do a post on it at some point in the future.

As I mentioned briefly in part 1 of the series, my interconnect actually consists of two independent networks with the same topology. The RPC network is intended for control-plane transactions and supports function call/return semantics (request followed by response) as well as interrupts (one-way datagrams). The DMA network is meant for bulk data transfers between cores and memory devices.

Layer-2 header

The layer-2 header is the same for both networks:
Bits 31:16: Source address
Bits 15:0:  Dest address

This is then followed by the layer-3 header for the protocol of interest. Which protocol is in use depends on the interface; the routers are optimized for one or the other. I may consider changing this in the future.

DMA network

Packet format

Word 0:  Layer-2 header
Word 1:  Opcode (bits 31:24), payload length in words (bits 23:0; only the rightmost 10 bits are implemented)
Word 2:  Physical memory address
Word 3+: Zero or more application-layer data words

Protocol description

The DMA network is meant for bulk data transfers and is normally memory mapped when used by a CPU.

It supports read and write transactions of an integer number of 32-bit words, up to 512 data words plus three header words. This size was chosen so that a DMA transfer could transport an entire Ethernet frame or typical NAND page in one packet.

Byte write enables are not supported; it is expected that a CPU core requiring this functionality will use read-modify-write semantics inside the L1 cache and then move words (or cache lines containing several words) over the DMA network.

The physical DMA address space is 48 bits: each of the 2^16 possible cores in the SoC has 32 bits of address space. If one core requires more than 4GB of address space it may respond to several consecutive DMA addresses. CPU cores are expected to translate the 48-bit physical addresses into 32 or 64 bit virtual addresses as required by their microarchitecture.

Write transactions are unidirectional: a single packet with the opcode set to "write request" is all that is required. The destination host may send an RPC interrupt back on success or failure of the write; however, this is not required by the layer 3 protocol. Specific application layer APIs may mandate write acknowledgements.

Read transactions are bidirectional: a "read request" packet with length set to the desired read size, and no data words, is sent. The response is a "read data" packet with the appropriate length and data fields. As with write transactions, failure interrupts are optional at layer 3 but typically required by application layer APIs.
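
To make the header layout concrete, here is a sketch of how a sender might pack the three DMA header words (the opcode values are hypothetical; the actual encodings are not given in this post):

#include <cstdint>
#include <vector>

enum DmaOpcode : uint32_t { DMA_WRITE_REQUEST = 1, DMA_READ_REQUEST = 2, DMA_READ_DATA = 3 };  // hypothetical values

// Pack the three header words per the table above: layer-2 header, opcode/length,
// 32-bit physical address within the destination core.
std::vector<uint32_t> dma_header(uint16_t src, uint16_t dst, DmaOpcode op,
                                 uint32_t len_words, uint32_t phys_addr)
{
    std::vector<uint32_t> pkt;
    pkt.push_back((uint32_t(src) << 16) | dst);                 // layer-2: source | dest
    pkt.push_back((uint32_t(op) << 24) | (len_words & 0x3FF));  // opcode | length (10 bits used)
    pkt.push_back(phys_addr);
    return pkt;
}

// A 48-bit physical DMA address splits into the destination core (top 16 bits, used
// as the layer-2 dest address) and a 32-bit offset within that core.
void split_dma_address(uint64_t addr48, uint16_t &core, uint32_t &offset)
{
    core   = uint16_t(addr48 >> 32);
    offset = uint32_t(addr48 & 0xFFFFFFFF);
}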

RPC network

Packet format

Word 0:  Layer-2 header
Word 1:  Callnum (bits 31:24), type (bits 23:21), application-layer data (bits 20:0)
Word 2:  Application-layer data
Word 3:  Application-layer data

Protocol description

The RPC network is meant for small, low-latency control transfers and is normally register mapped when used by a CPU.

It supports fixed-length packets of four words so as to easily fit into standard register-based calling conventions.

The "callnum" field uniquely identifies the specific request / interrupt being performed. The meaning of this field is up to the application-layer protocol.

The "type" field can be one of the following:
  • Function call request
    The source host is requesting the destination host to perform some action. A response is required.
  • Function return (success)
    The source host has completed the requested action successfully. The application-layer protocol may specify a return value.
  • Function return (fail)
    The source host attempted the requested operation but could not complete it. The application-layer protocol may specify an error code.
  • Function return (retry)
    The source host is busy with a long-running operation and cannot complete the requested operation now, but might be able to in the future. The source host may re-send the request in the future or consider this to be a failure.
  • Interrupt
    Something interesting happened at the source host, and the destination host has previously requested to be notified when this happens.
  • Host prohibited
    Sent by a router to indicate that the destination host attempted to reach a host in violation of security policy. The source address of the packet is the prohibited address.
  • Host unreachable
    Sent by a router to indicate that the destination host attempted to reach a nonexistent address. The source address of the packet is the invalid address.
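
Putting the RPC format together, a sender-side sketch might look like this (the numeric type codes and helper names are hypothetical; only the field layout follows the table above):

#include <array>
#include <cstdint>

enum RpcType : uint32_t {           // hypothetical encodings of the 3-bit type field
    RPC_CALL = 0, RPC_RETURN_OK = 1, RPC_RETURN_FAIL = 2, RPC_RETURN_RETRY = 3,
    RPC_INTERRUPT = 4, RPC_HOST_PROHIBITED = 5, RPC_HOST_UNREACHABLE = 6
};

// Build a fixed-length four-word RPC packet: layer-2 header, then callnum/type/data.
std::array<uint32_t, 4> rpc_packet(uint16_t src, uint16_t dst, uint8_t callnum,
                                   RpcType type, uint32_t d0, uint32_t d1, uint32_t d2)
{
    return {
        (uint32_t(src) << 16) | dst,                                  // layer-2 header
        (uint32_t(callnum) << 24) | (type << 21) | (d0 & 0x1FFFFF),   // callnum | type | 21-bit data
        d1,
        d2
    };
}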

by Andrew Zonenberg (noreply@blogger.com) at October 14, 2013 06:39 AM

CMake, CTest, and CDash for Xilinx FPGAs, part 2

This is a follow-up to my post from yesterday. I've made major progress; had I known things would go this fast, I wouldn't have written that post until today :)

The current version of the script can compile HDL designs to both FPGA bitstreams and ISim test cases, as well as run the simulation executable in the form of a unit test. There's no direct support for CPLDs yet (which will pretty much involve refactoring the xst invocation out into a function and adding some code to call cpldfit), but that will come soon.

Also on the to-do list:
  • Support for invoking PlanAhead in both pre-synthesis and post-PAR modes
  • Support for programming bitstreams to FPGAs and CPLDs using iMPACT via a "make program" type target
  • Support for indirect programming (need to generate ROM files etc)
  • Support for programming bitstreams to FPGAs and CPLDs using my JTAG toolchain (uses libftdi and the Digilent API as back ends, so I can integrate FT2232-based debug/program modules into my boards and not rely on the Xilinx platform cable)
  • Support for more command-line flags for the toolchain. Right now all of the ngdbuild/map/par/trce/bitgen flags are hard-coded and only about half of the default xst flags are changeable.
  • Support for mixed hardware/ISim/C++ cosimulation (using pipes and $fread/$fwrite to bridge to ISim and JTAG to bridge to real hardware)
Without further ado, here's a usage example for the major new feature:

########################################################################################################################
# Global synthesis flags

set(XILINX_FILTER_FILE ${CMAKE_CURRENT_SOURCE_DIR}/filter.filter)

set(XST_KEEP_HIERARCHY Soft)
set(XST_NETLIST_HIERARCHY Rebuilt)

########################################################################################################################
# Current top-level module
add_fpga_target(
OUTPUT
JtagTest
TOP_LEVEL
${CMAKE_CURRENT_SOURCE_DIR}/JtagTest.v
CONSTRAINTS
${CMAKE_SOURCE_DIR}/ucf/JtagTest.ucf
DEVICE
xc6slx45-3-csg324
SOURCES
${CMAKE_CURRENT_SOURCE_DIR}/debug/JtagDebugController.v
${CMAKE_CURRENT_SOURCE_DIR}/noc/common/NOCArbiter.v
${CMAKE_CURRENT_SOURCE_DIR}/noc/common/NOCRouterCore.v
${CMAKE_CURRENT_SOURCE_DIR}/noc/common/NOCRouterMux.v
${CMAKE_CURRENT_SOURCE_DIR}/noc/rpc/RPCRouter.v
${CMAKE_CURRENT_SOURCE_DIR}/noc/rpc/RPCRouterExitQueue.v
${CMAKE_CURRENT_SOURCE_DIR}/peripherals/NetworkedButtonArray.v
${CMAKE_CURRENT_SOURCE_DIR}/peripherals/NetworkedLEDBank.v
${CMAKE_CURRENT_SOURCE_DIR}/util/MediumBlockRamFifo.v
${CMAKE_CURRENT_SOURCE_DIR}/util/SwitchDebouncer.v
${CMAKE_CURRENT_SOURCE_DIR}/util/SwitchDebouncerBlock.v
${CMAKE_CURRENT_SOURCE_DIR}/util/ThreeStageSynchronizer.v
)

The add_fpga_target function uses the OUTPUT parameter as the base name for all of the temporary files created during compilation.

The TOP_LEVEL parameter specifies the top-level source file for the module. For now the base name of the TOP_LEVEL file is used as the top-level module name; in the future I may make the TOP_LEVEL parameter specify the module name and then add that file (along with all the others) to the SOURCES section.

DEVICE and SOURCES should be self-explanatory. Note that the Xilinx toolchain expects the part numbers in a specific format - there's a dash between the speed grade and the package (unlike the actual part numbers) and the temperature range is not specified.

Full source for this monster is below. Now that it's reached the point of basic usability I won't be blogging on it anymore except to announce the stable release on Google Code once I've worked out the rest of the kinks and bugs.

########################################################################################################################
# @file FindXilinx.cmake
# @author Andrew D. Zonenberg
# @brief Xilinx ISE toolchain CMake module
########################################################################################################################

########################################################################################################################
# Autodetect Xilinx paths (very hacky for now)

# TODO: Print messages only when configuring

# Find /opt/Xilinx or similar
find_file(XILINX_PARENT NAMES Xilinx PATHS /opt)
if(XILINX_PARENT STREQUAL "XILINX_PARENT-NOTFOUND")
message(FATAL_ERROR "No Xilinx toolchain installation found")
endif()

# Find /opt/Xilinx/VERSION
# TODO: Figure out a better way of doing this
find_file(XILINX NAMES 14.3 PATHS ${XILINX_PARENT})
if(XILINX STREQUAL "XILINX-NOTFOUND")
message(FATAL_ERROR "No ISE 14.3 installation found")
endif()
#message(STATUS "Found Xilinx toolchain... ${XILINX}")

# Set current OS architecture (TODO: autodetect)
set(XILINX_ARCH lin64)

# Find fuse
find_program(FUSE names fuse PATHS "${XILINX}/ISE_DS/ISE/bin/${XILINX_ARCH}/" NO_DEFAULT_PATH)
if(FUSE STREQUAL "FUSE-NOTFOUND")
message(FATAL_ERROR "No Xilinx fuse installation found")
endif()
#message(STATUS "Found Xilinx fuse... ${FUSE}")

# Find xst
find_file(XST NAMES xst PATHS "${XILINX}/ISE_DS/ISE/bin/${XILINX_ARCH}/")
if(XST STREQUAL "XST-NOTFOUND")
message(FATAL_ERROR "No Xilinx xst installation found")
endif()
#message(STATUS "Found Xilinx xst... ${XST}")

# Find ngdbuild
find_file(NGDBUILD NAMES ngdbuild PATHS "${XILINX}/ISE_DS/ISE/bin/${XILINX_ARCH}/")
if(NGDBUILD STREQUAL "NGDBUILD-NOTFOUND")
message(FATAL_ERROR "No Xilinx ngdbuild installation found")
endif()
#message(STATUS "Found Xilinx ngdbuild... ${NGDBUILD}")

# Find map
find_file(MAP NAMES map PATHS "${XILINX}/ISE_DS/ISE/bin/${XILINX_ARCH}/")
if(MAP STREQUAL "MAP-NOTFOUND")
message(FATAL_ERROR "No Xilinx map installation found")
endif()
#message(STATUS "Found Xilinx map... ${MAP}")

# Find par
find_file(PAR NAMES par PATHS "${XILINX}/ISE_DS/ISE/bin/${XILINX_ARCH}/")
if(PAR STREQUAL "PAR-NOTFOUND")
message(FATAL_ERROR "No Xilinx par installation found")
endif()
#message(STATUS "Found Xilinx par... ${PAR}")

# Find trce
find_file(TRCE NAMES trce PATHS "${XILINX}/ISE_DS/ISE/bin/${XILINX_ARCH}/")
if(TRCE STREQUAL "TRCE-NOTFOUND")
message(FATAL_ERROR "No Xilinx trce installation found")
endif()
#message(STATUS "Found Xilinx trce... ${TRCE}")

# Find bitgen
find_file(BITGEN NAMES bitgen PATHS "${XILINX}/ISE_DS/ISE/bin/${XILINX_ARCH}/")
if(BITGEN STREQUAL "BITGEN-NOTFOUND")
message(FATAL_ERROR "No Xilinx bitgen installation found")
endif()
#message(STATUS "Found Xilinx bitgen... ${BITGEN}")

########################################################################################################################
# Argument parsing helper

macro(xilinx_parse_args _output _top_level _ucf _device _sources)
set(${_top_level} FALSE)
set(${_output} FALSE)
set(${_ucf} FALSE)
set(${_device} FALSE)
set(${_sources})
set(_found_sources FALSE)
set(_found_device FALSE)
set(_found_output FALSE)
set(_found_ucf FALSE)
set(_found_top_level FALSE)
foreach(arg ${ARGN})
if(${arg} STREQUAL "TOP_LEVEL")
set(_found_top_level TRUE)
elseif(${arg} STREQUAL "SOURCES")
set(_found_sources TRUE)
elseif(${arg} STREQUAL "CONSTRAINTS")
set(_found_ucf TRUE)
elseif(${arg} STREQUAL "DEVICE")
set(_found_device TRUE)
elseif(${arg} STREQUAL "OUTPUT")
set(_found_output TRUE)
elseif(${_found_sources})
list(APPEND ${_sources} ${arg})
elseif(${_found_device})
if(${_device})
message(FATAL_ERROR "Multiple devices specified in xilinx_parse_args")
else()
set(${_device} ${arg})
endif()
elseif(${_found_ucf})
if(${_ucf})
message(FATAL_ERROR "Multiple constraint files specified in xilinx_parse_args")
else()
set(${_ucf} ${arg})
endif()
elseif(${_found_top_level})
if(${_top_level})
message(FATAL_ERROR "Multiple top-level files specified in xilinx_parse_args (${_top_level})")
else()
set(${_top_level} ${arg})
endif()
elseif(${_found_output})
if(${_output})
message(FATAL_ERROR "Multiple outputs specified in xilinx_parse_args")
else()
set(${_output} ${arg})
endif()
else()
message(FATAL_ERROR "Unrecognized command ${arg} in xilinx_parse_args")
endif()
endforeach()
endmacro()

########################################################################################################################
# Default flags for fuse
set(FUSE_FLAGS "-intstyle ise -incremental -lib unisims_ver -lib unimacro_ver -lib xilinxcorelib_ver -lib secureip")

########################################################################################################################
# ISim executable generation

function(add_isim_executable OUTPUT_FILE )

# Parse args
xilinx_parse_args(OUTFNAME TOP_LEVEL UCF DEVICE SOURCES ${ARGN})

# Get base name without extension of the top-level module
get_filename_component(TOPLEVEL_BASENAME ${TOP_LEVEL} NAME_WE )

# Write the .prj file
set(PRJ_FILE "${CMAKE_CURRENT_BINARY_DIR}/${OUTPUT_FILE}.prj")
file(WRITE ${PRJ_FILE} "verilog work \"${TOP_LEVEL}\"\n")
foreach(f ${SOURCES})
file(APPEND ${PRJ_FILE} "verilog work \"${f}\"\n")
endforeach()
file(APPEND ${PRJ_FILE} "verilog work \"${XILINX}/ISE_DS/ISE/verilog/src/glbl.v\"\n")

# Write the run-fuse wrapper script
set(FUSE_ERR_LOG "${CMAKE_CURRENT_BINARY_DIR}/${OUTPUT_FILE}_err.log")
set(FUSE_LOG "${CMAKE_CURRENT_BINARY_DIR}/${OUTPUT_FILE}_build.log")
set(FUSE_WRAPPER "${CMAKE_CURRENT_BINARY_DIR}/runfuse${OUTPUT_FILE}.sh")
file(WRITE ${FUSE_WRAPPER} "#!/bin/bash\n")
file(APPEND ${FUSE_WRAPPER} "cd ${CMAKE_CURRENT_BINARY_DIR}\n")
#file(APPEND ${FUSE_WRAPPER} "source ${XILINX}/ISE_DS/settings64.sh\n")
file(APPEND ${FUSE_WRAPPER} "${FUSE} ${FUSE_FLAGS} -o ${CMAKE_CURRENT_BINARY_DIR}/${OUTPUT_FILE} -prj ${PRJ_FILE}")
file(APPEND ${FUSE_WRAPPER} " work.${TOPLEVEL_BASENAME} work.glbl > ${FUSE_LOG} 2> ${FUSE_ERR_LOG}\n")
file(APPEND ${FUSE_WRAPPER} "if [ \"$?\" != \"0\" ]; then\n")
file(APPEND ${FUSE_WRAPPER} " cat ${FUSE_ERR_LOG} | grep \"ERROR\"\n")
file(APPEND ${FUSE_WRAPPER} " exit 1;\n")
file(APPEND ${FUSE_WRAPPER} "fi\n")
file(APPEND ${FUSE_WRAPPER} "exit 0;\n")
execute_process(COMMAND chmod +x ${FUSE_WRAPPER})

# Main compile rule
# TODO: tweak this
add_custom_target(
${OUTPUT_FILE} ALL
COMMAND ${FUSE_WRAPPER}
DEPENDS ${SOURCES} ${TOP_LEVEL}
COMMENT "Building ISim executable ${OUTPUT_FILE}..."
)

# Write the tcl script
set(TCL_FILE "${CMAKE_CURRENT_BINARY_DIR}/${OUTPUT_FILE}.tcl")
file(WRITE ${TCL_FILE} "onerror {resume}\n")
file(APPEND ${TCL_FILE} "wave add /\n")
file(APPEND ${TCL_FILE} "run 1000 ns;\n")
file(APPEND ${TCL_FILE} "exit;\n")

# Write the run-test wrapper script
set(TEST_WRAPPER "${CMAKE_CURRENT_BINARY_DIR}/run${OUTPUT_FILE}.sh")
file(WRITE ${TEST_WRAPPER} "#!/bin/bash\n")
file(APPEND ${TEST_WRAPPER} "cd ${CMAKE_CURRENT_BINARY_DIR}\n")
file(APPEND ${TEST_WRAPPER} "source ${XILINX}/ISE_DS/settings64.sh\n")
file(APPEND ${TEST_WRAPPER} "./${OUTPUT_FILE} -tclbatch ${TCL_FILE} -intstyle silent -vcdfile ${OUTPUT_FILE}.vcd -vcdunit ps || exit 1\n")
file(APPEND ${TEST_WRAPPER} "cat isim.log | grep -q FAIL\n")
file(APPEND ${TEST_WRAPPER} "if [ \"$?\" != \"1\" ]; then\n")
file(APPEND ${TEST_WRAPPER} " exit 1;\n")
file(APPEND ${TEST_WRAPPER} "fi\n")
execute_process(COMMAND chmod +x ${TEST_WRAPPER})

endfunction()

########################################################################################################################
# Test generation
#
# Usage:
# add_isim_test(NandGate
# TOP_LEVEL
# ${CMAKE_CURRENT_SOURCE_DIR}/testNandGate.v
# SOURCES
# ${CMAKE_SOURCE_DIR}/hdl/NandGate.v
# )

function(add_isim_test TEST_NAME)

# Parse args
xilinx_parse_args(OUTPUT TOP_LEVEL UCF DEVICE SOURCES ${ARGN})

# Add the sim executable
add_isim_executable(test${TEST_NAME}
TOP_LEVEL
${TOP_LEVEL}
SOURCES
${SOURCES}
)

add_test(${TEST_NAME}
"${CMAKE_CURRENT_BINARY_DIR}/runtest${TEST_NAME}.sh")
set_property(TEST ${TEST_NAME} APPEND PROPERTY DEPENDS test${TEST_NAME})


endfunction()

########################################################################################################################
# Default flags for Xilinx toolchain

# Compiler flags
set(XST_MAX_FANOUT 100000)
set(XST_OPT_MODE Speed)
set(XST_OPT_LEVEL 1)
set(XST_KEEP_HIERARCHY No)
set(XST_NETLIST_HIERARCHY As_Optimized)
set(XST_RESOURCE_SHARING Yes)
set(XST_RAM_EXTRACT Yes)
set(XST_SHREG_MIN_SIZE 2)
set(XST_REGISTER_BALANCING No)

set(XILINX_FILTER_FILE FALSE)

########################################################################################################################
# Xilinx FPGA bitstream generation

function(add_fpga_target)

# Parse args
xilinx_parse_args(OUTFNAME TOP_LEVEL UCF DEVICE SOURCES ${ARGN})

# Get base name without extension of the top-level module
get_filename_component(TOPLEVEL_BASENAME ${TOP_LEVEL} NAME_WE )

# Set the filter flag
SET(XILINX_FILTER_FLAG "")
if(XILINX_FILTER_FILE)
SET(XILINX_FILTER_FLAG "-filter ${XILINX_FILTER_FILE}")
ENDIF()

# Write the .prj file
set(PRJ_FILE "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.prj")
file(WRITE ${PRJ_FILE} "verilog work \"${TOP_LEVEL}\"\n")
foreach(f ${SOURCES})
file(APPEND ${PRJ_FILE} "verilog work \"${f}\"\n")
endforeach()
file(APPEND ${PRJ_FILE} "verilog work \"${XILINX}/ISE_DS/ISE/verilog/src/glbl.v\"\n")

# Create the XST input script
set(XST_DIR "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_xst")
file(MAKE_DIRECTORY ${XST_DIR})
set(XST_TMPDIR "${XST_DIR}/projnav.tmp")
file(MAKE_DIRECTORY ${XST_TMPDIR})
set(XST_SCRIPT_FILE "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.xst")
set(XST_SYR_FILE "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.syr")
file(WRITE ${XST_SCRIPT_FILE} "set -tmpdir \"${XST_TMPDIR}\"\n")
file(APPEND ${XST_SCRIPT_FILE} "set -xsthdpdir ${XST_DIR}\n")
file(APPEND ${XST_SCRIPT_FILE} "run\n")
file(APPEND ${XST_SCRIPT_FILE} "-ifn ${PRJ_FILE}\n")
file(APPEND ${XST_SCRIPT_FILE} "-ofn ${OUTFNAME}\n")
file(APPEND ${XST_SCRIPT_FILE} "-ofmt NGC\n")
file(APPEND ${XST_SCRIPT_FILE} "-p ${DEVICE}\n")
file(APPEND ${XST_SCRIPT_FILE} "-top ${TOPLEVEL_BASENAME}\n")
file(APPEND ${XST_SCRIPT_FILE} "-slice_utilization_ratio 100\n")
file(APPEND ${XST_SCRIPT_FILE} "-bram_utilization_ratio 100\n")
file(APPEND ${XST_SCRIPT_FILE} "-dsp_utilization_ratio 100\n")
file(APPEND ${XST_SCRIPT_FILE} "-bufg 16\n")
file(APPEND ${XST_SCRIPT_FILE} "-hierarchy_separator /\n")
file(APPEND ${XST_SCRIPT_FILE} "-bus_delimiter <>\n")
file(APPEND ${XST_SCRIPT_FILE} "-case Maintain\n")
file(APPEND ${XST_SCRIPT_FILE} "-max_fanout ${XST_MAX_FANOUT}\n")
file(APPEND ${XST_SCRIPT_FILE} "-opt_mode ${XST_OPT_MODE}\n")
file(APPEND ${XST_SCRIPT_FILE} "-opt_level ${XST_OPT_LEVEL}\n")
file(APPEND ${XST_SCRIPT_FILE} "-keep_hierarchy ${XST_KEEP_HIERARCHY}\n")
file(APPEND ${XST_SCRIPT_FILE} "-netlist_hierarchy ${XST_NETLIST_HIERARCHY}\n")
file(APPEND ${XST_SCRIPT_FILE} "-resource_sharing ${XST_RESOURCE_SHARING}\n")
file(APPEND ${XST_SCRIPT_FILE} "-ram_extract ${XST_RAM_EXTRACT}\n")
file(APPEND ${XST_SCRIPT_FILE} "-shreg_min_size ${XST_SHREG_MIN_SIZE}\n")
file(APPEND ${XST_SCRIPT_FILE} "-register_balancing ${XST_REGISTER_BALANCING}\n")

#-power NO
#-iuc NO
#-rtlview Yes
#-glob_opt AllClockNets
#-read_cores YES
#-write_timing_constraints NO
#-cross_clock_analysis NO
#-lc Auto
#-reduce_control_sets Auto
#-fsm_extract YES -fsm_encoding Auto
#-safe_implementation No
#-fsm_style LUT
#-ram_style Auto
#-rom_extract Yes
#-shreg_extract YES
#-rom_style Auto
#-auto_bram_packing NO
#-async_to_sync NO
#-use_dsp48 Auto
#-iobuf YES
#-register_duplication YES
#-optimize_primitives NO
#-use_clock_enable Auto
#-use_sync_set Auto
#-use_sync_reset Auto
#-iob Auto
#-equivalent_register_removal YES
#-slice_utilization_ratio_maxmargin 5

# Create the run-XST script
set(XST_BUILD_LOG "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_xst.log")
set(XST_RUN_SCRIPT "${CMAKE_CURRENT_BINARY_DIR}/runXST_${OUTFNAME}.sh")
file(WRITE ${XST_RUN_SCRIPT} "#!/bin/bash\n")
file(APPEND ${XST_RUN_SCRIPT} "${XST} -intstyle xflow ${XILINX_FILTER_FLAG} -ifn ${XST_SCRIPT_FILE} -ofn ${XST_SYR_FILE} > ${XST_BUILD_LOG}\n")
file(APPEND ${XST_RUN_SCRIPT} "if [ \"$?\" != \"0\" ]; then\n")
file(APPEND ${XST_RUN_SCRIPT} " cat ${XST_BUILD_LOG} | grep \"ERROR\"\n")
file(APPEND ${XST_RUN_SCRIPT} " exit 1;\n")
file(APPEND ${XST_RUN_SCRIPT} "fi\n")
file(APPEND ${XST_RUN_SCRIPT} "cat ${XST_SYR_FILE} | grep \"WARNING\"\n")
file(APPEND ${XST_RUN_SCRIPT} "exit 0;\n")
execute_process(COMMAND chmod +x ${XST_RUN_SCRIPT})

# Synthesize
set(NGC_FILE "${OUTFNAME}.ngc")
add_custom_command(
OUTPUT ${NGC_FILE}
COMMAND ${XST_RUN_SCRIPT}
DEPENDS ${SOURCES} ${TOP_LEVEL} ${UCF}
COMMENT "Synthesizing NGC object ${NGC_FILE}"
)

# Create the run-NGDBUILD script
# NGD_FILE and PCF_FILE are interpolated into the generated script below, so define them first
set(NGD_FILE "${OUTFNAME}.ngd")
set(PCF_FILE "${OUTFNAME}.pcf")
set(NGDBUILD_LOG "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_ngdbuild.log")
set(NGDBUILD_RUN_SCRIPT "${CMAKE_CURRENT_BINARY_DIR}/runNGDBUILD_${OUTFNAME}.sh")
set(NGDBUILD_BLD_FILE "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.bld")
file(WRITE ${NGDBUILD_RUN_SCRIPT} "#!/bin/bash\n")
file(APPEND ${NGDBUILD_RUN_SCRIPT} "${NGDBUILD} -intstyle ise ${XILINX_FILTER_FLAG} -dd _ngo -nt timestamp -uc ${UCF} -p ${DEVICE} ${NGC_FILE} ${NGD_FILE} > ${NGDBUILD_LOG}\n")
file(APPEND ${NGDBUILD_RUN_SCRIPT} "if [ \"$?\" != \"0\" ]; then\n")
file(APPEND ${NGDBUILD_RUN_SCRIPT} " cat ${NGDBUILD_LOG} | grep \"ERROR\"\n")
file(APPEND ${NGDBUILD_RUN_SCRIPT} " exit 1;\n")
file(APPEND ${NGDBUILD_RUN_SCRIPT} "fi\n")
file(APPEND ${NGDBUILD_RUN_SCRIPT} "cat ${NGDBUILD_BLD_FILE} | grep \"WARNING\"\n")
file(APPEND ${NGDBUILD_RUN_SCRIPT} "exit 0;\n")
execute_process(COMMAND chmod +x ${NGDBUILD_RUN_SCRIPT})

# Translate
add_custom_command(
OUTPUT ${NGD_FILE}
COMMAND ${NGDBUILD_RUN_SCRIPT}
DEPENDS ${UCF} ${NGC_FILE}
COMMENT "Translating NGD object ${NGD_FILE}"
)

# Create the run-MAP script
set(MAP_NCD_FILE "${OUTFNAME}_map.ncd")
set(MAP_LOG "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_map.log")
set(MAP_MRP_FILE "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_map.mrp")
set(MAP_RUN_SCRIPT "${CMAKE_CURRENT_BINARY_DIR}/runMAP_${OUTFNAME}.sh")
file(WRITE ${MAP_RUN_SCRIPT} "#!/bin/bash\n")
file(APPEND ${MAP_RUN_SCRIPT} "${MAP} -intstyle ise -p ${DEVICE} -w ${XILINX_FILTER_FLAG} -logic_opt off -ol high -t 1 -xt 0 -register_duplication off -r 4 -global_opt off -mt 2 -ir off -pr off -lc off -power off -o ${MAP_NCD_FILE} ${NGD_FILE} ${PCF_FILE} > ${MAP_LOG}\n")
file(APPEND ${MAP_RUN_SCRIPT} "if [ \"$?\" != \"0\" ]; then\n")
file(APPEND ${MAP_RUN_SCRIPT} " cat ${MAP_LOG} | grep \"ERROR\"\n")
file(APPEND ${MAP_RUN_SCRIPT} " exit 1;\n")
file(APPEND ${MAP_RUN_SCRIPT} "fi\n")
file(APPEND ${MAP_RUN_SCRIPT} "cat ${MAP_MRP_FILE} | grep \"WARNING\"\n")
file(APPEND ${MAP_RUN_SCRIPT} "exit 0;\n")
execute_process(COMMAND chmod +x ${MAP_RUN_SCRIPT})

# Map
add_custom_command(
OUTPUT ${MAP_NCD_FILE}
COMMAND ${MAP_RUN_SCRIPT}
DEPENDS ${UCF} ${NGD_FILE}
COMMENT "Mapping native circuit description ${MAP_NCD_FILE}"
)

# Create the run-PAR script
set(NCD_FILE "${OUTFNAME}.ncd")
set(PAR_LOG "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_par.log")
set(PAR_RUN_SCRIPT "${CMAKE_CURRENT_BINARY_DIR}/runPAR_${OUTFNAME}.sh")
set(PAR_PAR_FILE "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.par")
file(WRITE ${PAR_RUN_SCRIPT} "#!/bin/bash\n")
file(APPEND ${PAR_RUN_SCRIPT} "${PAR} -w -intstyle ise ${XILINX_FILTER_FLAG} -ol high -mt 4 ${MAP_NCD_FILE} ${NCD_FILE} ${PCF_FILE} > ${PAR_LOG}\n")
file(APPEND ${PAR_RUN_SCRIPT} "if [ \"$?\" != \"0\" ]; then\n")
file(APPEND ${PAR_RUN_SCRIPT} " cat ${PAR_LOG} | grep \"ERROR\"\n")
file(APPEND ${PAR_RUN_SCRIPT} " exit 1;\n")
file(APPEND ${PAR_RUN_SCRIPT} "fi\n")
file(APPEND ${PAR_RUN_SCRIPT} "cat ${PAR_PAR_FILE} | grep \"WARNING\"\n")
file(APPEND ${PAR_RUN_SCRIPT} "exit 0;\n")
execute_process(COMMAND chmod +x ${PAR_RUN_SCRIPT})

# PAR
add_custom_command(
OUTPUT ${NCD_FILE}
COMMAND ${PAR_RUN_SCRIPT}
DEPENDS ${UCF} ${MAP_NCD_FILE}
COMMENT "Place and route native circuit description ${NCD_FILE}"
)

# Create the run-trce script
set(TWX_FILE "${OUTFNAME}.twx")
set(TWR_FILE "${OUTFNAME}.twr")
set(TRCE_LOG "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_trce.log")
set(TRCE_RUN_SCRIPT "${CMAKE_CURRENT_BINARY_DIR}/runTRCE_${OUTFNAME}.sh")
file(WRITE ${TRCE_RUN_SCRIPT} "#!/bin/bash\n")
file(APPEND ${TRCE_RUN_SCRIPT} "${TRCE} -intstyle ise -v 3 -s 2 -n 3 ${XILINX_FILTER_FLAG} -fastpaths -xml ${TWX_FILE} ${NCD_FILE} -o ${TWR_FILE} ${PCF_FILE} -ucf ${UCF} > ${TRCE_LOG}\n")
file(APPEND ${TRCE_RUN_SCRIPT} "if [ \"$?\" != \"0\" ]; then\n")
file(APPEND ${TRCE_RUN_SCRIPT} " cat ${TRCE_LOG} | grep \"ERROR\"\n")
file(APPEND ${TRCE_RUN_SCRIPT} " exit 1;\n")
file(APPEND ${TRCE_RUN_SCRIPT} "fi\n")
file(APPEND ${TRCE_RUN_SCRIPT} "cat ${TWR_FILE} | grep \"0 timing errors detected\" > /dev/null\n")
file(APPEND ${TRCE_RUN_SCRIPT} "if [ \"$?\" != \"0\" ]; then\n")
file(APPEND ${TRCE_RUN_SCRIPT} " cat ${TWR_FILE} | grep \"paths analyzed\"\n")
file(APPEND ${TRCE_RUN_SCRIPT} " cat ${TWR_FILE} | grep \"timing errors detected\"\n")
file(APPEND ${TRCE_RUN_SCRIPT} " cat ${TWR_FILE} | grep \"Minimum period is\"\n")
file(APPEND ${TRCE_RUN_SCRIPT} " cat ${TWR_FILE} | grep \"Score\"\n")
file(APPEND ${TRCE_RUN_SCRIPT} " exit 1;\n")
file(APPEND ${TRCE_RUN_SCRIPT} "fi\n")
execute_process(COMMAND chmod +x ${TRCE_RUN_SCRIPT})

# TRCE
add_custom_command(
OUTPUT ${TWR_FILE}
COMMAND ${TRCE_RUN_SCRIPT}
DEPENDS ${UCF} ${NCD_FILE}
COMMENT "Generate static timing analysis ${TWR_FILE}"
)

# Create the bitgen input script
set(BITGEN_SCRIPT_FILE "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.ut")
set(BIT_FILE "${OUTFNAME}.bit")
file(WRITE ${BITGEN_SCRIPT_FILE} "-w\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g DebugBitstream:No\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g Binary:No\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g CRC:Enable\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g Reset_on_err:No\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g ConfigRate:2\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g ProgPin:PullUp\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g TckPin:PullUp\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g TdiPin:PullUp\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g TdoPin:PullUp\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g TmsPin:PullUp\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g UnusedPin:PullDown\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g UserID:0xFFFFFFFF\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g ExtMasterCclk_en:No\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g SPI_buswidth:1\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g TIMER_CFG:0xFFFF\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g multipin_wakeup:No\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g StartUpClk:CClk\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g DONE_cycle:4\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g GTS_cycle:5\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g GWE_cycle:6\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g LCK_cycle:NoWait\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g Security:None\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g DonePipe:Yes\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g DriveDone:Yes\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g en_sw_gsr:No\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g drive_awake:No\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g sw_clk:Startupclk\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g sw_gwe_cycle:5\n")
file(APPEND ${BITGEN_SCRIPT_FILE} "-g sw_gts_cycle:4\n")

# Create the run-bitgen script
set(BITGEN_LOG "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_bitgen.log")
set(BITGEN_RUN_SCRIPT "${CMAKE_CURRENT_BINARY_DIR}/runBITGEN_${OUTFNAME}.sh")
set(BITGEN_BGN_FILE "${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.bgn")
file(WRITE ${BITGEN_RUN_SCRIPT} "#!/bin/bash\n")
file(APPEND ${BITGEN_RUN_SCRIPT} "${BITGEN} -intstyle ise ${XILINX_FILTER_FLAG} -f ${BITGEN_SCRIPT_FILE} ${NCD_FILE} > ${BITGEN_LOG}\n")
file(APPEND ${BITGEN_RUN_SCRIPT} "if [ \"$?\" != \"0\" ]; then\n")
file(APPEND ${BITGEN_RUN_SCRIPT} " cat ${BITGEN_LOG} | grep \"ERROR\"\n")
file(APPEND ${BITGEN_RUN_SCRIPT} " exit 1;\n")
file(APPEND ${BITGEN_RUN_SCRIPT} "fi\n")
file(APPEND ${BITGEN_RUN_SCRIPT} "cat ${BITGEN_BGN_FILE} | grep \"WARNING\"\n")
file(APPEND ${BITGEN_RUN_SCRIPT} "exit 0;\n")
execute_process(COMMAND chmod +x ${BITGEN_RUN_SCRIPT})

# BITGEN
# Must depend on trce in order for timing failure to prevent bitgen from running
add_custom_target(
${OUTFNAME} ALL
COMMAND ${BITGEN_RUN_SCRIPT}
DEPENDS ${NCD_FILE} ${TWR_FILE}
COMMENT "Generate FPGA bitstream ${BIT_FILE}"
SOURCES ${NCD_FILE} ${TWR_FILE}
)

# Add additional make-clean files
# Do not delete run scripts or toolchain input files, only outputs
set_property(
DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
APPEND PROPERTY ADDITIONAL_MAKE_CLEAN_FILES
${XST_SYR_FILE}
${XST_BUILD_LOG}
${NGDBUILD_LOG}
${NGDBUILD_BLD_FILE}
${PCF_FILE}
${MAP_LOG}
${MAP_MRP_FILE}
${PAR_LOG}
${PAR_PAR_FILE}
${TWX_FILE}
${TRCE_LOG}
${BITGEN_LOG}
${BIT_FILE}
${BITGEN_BGN_FILE}
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.lso"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.map"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_map.map"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_map.ngm"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_map.xrpt"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_ngdbuild.xrpt"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.ngm"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.pad"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_pad.csv"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_pad.txt"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_par.xrpt"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.ptwx"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_summary.xml"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.unroutes"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_usage.xml"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.xpi"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_xst.xrpt"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}_bitgen.xwbt"
"${CMAKE_CURRENT_BINARY_DIR}/${OUTFNAME}.drc"
"${CMAKE_CURRENT_BINARY_DIR}/usage_statistics_webtalk.html"
"${CMAKE_CURRENT_BINARY_DIR}/webtalk.log"
"${CMAKE_CURRENT_BINARY_DIR}/par_usage_statistics.html"
)

endfunction()

# TODO: planAhead
#planAhead -ise yes -m64 -log planAhead.log -journal planAhead.jou -source pa.fromNcd.tcl

#pa.fromHdl.tcl (pre-synthesis)
#create_project -name lx9-lvds-ioexpander -dir "/home/azonenberg/native/programming/verilogpractice/lx9-lvds-ioexpander/planAhead_run_1" -part xc6slx9tqg144-2
#set_param project.pinAheadLayout yes
#set srcset [get_property srcset [current_run -impl]]
#set_property target_constrs_file "TopLevel.ucf" [current_fileset -constrset]
#set hdlfile [add_files [list {TopLevel.v}]]
#set_property file_type Verilog $hdlfile
#set_property library work $hdlfile
#set_property top TopLevel $srcset
#add_files [list {TopLevel.ucf}] -fileset [get_property constrset [current_run]]
#open_rtl_design -part xc6slx9tqg144-2

#pa.fromNcd.tcl contents (post-PAR)
#create_project -name lx9-lvds-ioexpander -dir "/home/azonenberg/native/programming/verilogpractice/lx9-lvds-ioexpander/planAhead_run_1" -part xc6slx9tqg144-2
#set srcset [get_property srcset [current_run -impl]]
#set_property design_mode GateLvl $srcset
#set_property edif_top_file "/home/azonenberg/native/programming/verilogpractice/lx9-lvds-ioexpander/TopLevel.ngc" [ get_property srcset [ current_run ] ]
#add_files -norecurse { {/home/azonenberg/native/programming/verilogpractice/lx9-lvds-ioexpander} }
#set_property target_constrs_file "TopLevel.ucf" [current_fileset -constrset]
#add_files [list {TopLevel.ucf}] -fileset [get_property constrset [current_run]]
#link_design
#read_xdl -file "/home/azonenberg/native/programming/verilogpractice/lx9-lvds-ioexpander/TopLevel.ncd"
#if {[catch {read_twx -name results_1 -file "/home/azonenberg/native/programming/verilogpractice/lx9-lvds-ioexpander/TopLevel.twx"} eInfo]} {
# puts "WARNING: there was a problem importing \"/home/azonenberg/native/programming/verilogpractice/lx9-lvds-ioexpander/TopLevel.twx\": $eInfo"
#}

by Andrew Zonenberg (noreply@blogger.com) at October 14, 2013 06:38 AM

SoC framework, part 3: libjtaghal

Almost all of my embedded development and debugging makes heavy use of JTAG, both for loading new bitstreams/firmware images and for interacting with on-chip debug systems.

When I first got into FPGA development I used the Xilinx Platform Cable USB II, which sells for $258.75 on Digikey as of this writing. It integrated nicely with the Xilinx IDE, but I quickly grew frustrated. I wanted to use the BSCAN_SPARTAN6 primitive in the FPGA to move debug data on and off the FPGA using JTAG, but Xilinx does not provide any sort of API for scripting the platform cable. Although iMPACT allows manual bit twiddling in the chain as well as executing pre-made SVF files, there is no way to do interactive testing with it.

My first step in deciding how to proceed was to see what made their adapter tick. I would have opened up the adapter to see what was inside, but Bunnie saved me the trouble by posting pictures a while ago as the Name That Ware for March 2011.

Xilinx Platform Cable USB II (image courtesy of Bunnie)
The vast majority of the footprints on the board aren't even populated... one can only guess what additional functionality may have been planned at one point. There's an XC3S200A FPGA, a Cypress USB MCU, USB descriptor EEPROM, flash for the FPGA, and then a bunch of passives for power regulation and level shifting. Overall, the design is quite simple and certainly not worth $250.

After browsing for something cheaper and based on a well-known chipset, I found the Digilent HS1, a $54.99 FT2232-based adapter which is supported by Digilent's documented JTAG API and integrated nicely with the Xilinx IDE. In addition, since it's a standard FTDI chipset it would be possible to interact with it at a lower level using libftd2xx. (I also built a custom FT232H-based programmer that I have half a dozen of around my lab, but I wanted a known-good design to verify my software on first.)

The HS1 worked quite well using the Xilinx tools, but I still needed JTAG code to talk to it and interact with the FPGAs for scripted tests. I looked at a couple of popular options and rejected each of them:
  • OpenOCD (GNU GPL, incompatible with the BSD license used by my work)
  • xc3sprog (GPL, standalone tool with no API, includes programming algorithms that can run directly off a .bit file)
  • urjtag (GPL, has a socket-based JTAG server under development but not released yet)
It looked like I was going to have to write my own software, so I sat down and did just that. The result was a C++ library I call libjtaghal (JTAG hardware abstraction layer). It will be released publicly under the 3-clause BSD license once I've cleaned it up a bit; in the meantime if anyone wants a raw code drop with no documentation and a not-quite-finished build system leave a comment and I'll post something.

The basic structure of libjtaghal is built around two core object types: interfaces and devices. A JtagInterface represents a connection to a single JTAG adapter. As of now I support:
  • FT*232H MPSSE (assumes ADBUS7 is the output enable, an option to configure this is planned for the future)
  • Digilent API (for HS1 and integrated programmers on the Atlys etc)
  • Generic socket-based protocol for talking to remote libjtaghal servers
My custom 8-port JTAG system (more to follow in a future post) will use my socket-based protocol and show up as 8 separate interfaces which can each be controlled independently (potentially from 8 separate client PCs).

 A JtagDevice represents a single chip in a scan chain. Support for multi-device scan chains needs a bit more work; this is one of the reasons I haven't released it yet.

A given JtagDevice may implement one or more additional interfaces. Some of these are:
  • CPLD (generic complex programmable logic device)
  • FPGA (generic FPGA device)
  • ProgrammableDevice (any device which accepts firmware of some sort, including CPLDs, FPGAs, MCUs, and JTAG-capable ROMs)
  • RPCNetworkInterface (a device which supports sending RPC messages over JTAG)
  • DMANetworkInterface (a device which supports sending DMA messages over JTAG)
  • RPCAndDMANetworkInterface (implements RPCNetworkInterface, DMANetworkInterface, and some logic to connect the two protocols)
This design allows several very handy design abstractions. For example, the below code is the sum total of the "program" mode for my "jtagclient" command-line application. It takes a JtagInterface object "iface" and programs the device at chain index "devnum" with the firmware image "bitfile". Note the complete lack of any device- or interface-specific code. The same function can configure a CoolRunner-II via one of my custom FTDI programmers or a Spartan-6 using the integrated Digilent programmer on a dev board without changing anything.

JtagDevice* device = iface.GetDevice(devnum);
if(device == NULL)
{
throw JtagExceptionWrapper(
"Device is null, cannot continue",
"",
JtagException::EXCEPTION_TYPE_BOARD_FAULT);
}

//Make sure it's a programmable device
ProgrammableDevice* pdev = dynamic_cast<ProgrammableDevice*>(device);
if(pdev == NULL)
{
throw JtagExceptionWrapper(
"Device is not a programmable device, cannot continue",
"",
JtagException::EXCEPTION_TYPE_BOARD_FAULT);
}

//Load the firmware image and program the device
printf("Loading firmware image...\n");
FirmwareImage* img = pdev->LoadFirmwareImage(bitfile);
printf("Programming device...\n");
pdev->Program(img);
printf("Configuration successful\n");
delete img;

This is part of the test case for my gigabit Ethernet MAC, allocating a page of memory on the device under test by talking to the RAM controller at NoC address "raddr" via the RPCNetworkInterface "iface". (Details on how this is implemented will be coming in a few posts.)

printf("Allocating memory...\n");
iface.RPCFunctionCall(raddr, RAM_ALLOCATE, 0, 0, 0, rxm);
uint32_t txptr = rxm.data[1];
printf(" Transmit buffer is at 0x%08x\n", txptr);

There's a lot more to the system than this but I'll save the rest for my next post :)

by Andrew Zonenberg (noreply@blogger.com) at October 14, 2013 06:35 AM

SoC framework, part 4: jtagd

As I mentioned in my previous post, libjtaghal supports a socket-based protocol for communicating with JTAG adapters. This allows some very powerful capabilities, for example sharing a single dev board among multiple developers.

The core of this is a TCP server written in C++ known as jtagd. So far I have tested it on Debian 7 on both x86_64-linux-gnu and arm-linux-gnueabihf architectures (laptop computer and Beaglebone Black).

The main jtagd executable connects to a JtagInterface object and bridges it out to a TCP socket using my custom protocol. The protocol is not 100% finalized at this point, several features (like a magic-number banner to verify the client is actually talking to a valid jtagd and not a mistyped port number) will be added before a public release.

Starting a jtagd is quite simple: run "jtagd --list" to see what interfaces are available, then connect to one of them.

azonenberg@mars$ jtagd --list
JTAG server daemon [SVN rev 1230M] by Andrew D. Zonenberg.

License: 3-clause ("new" or "modified") BSD.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Digilent API version: 2.9.3
Enumerating interfaces... 2 found
Interface 0: JtagSmt1
Serial number: SN:210203825011
User ID: JtagSmt1
Default clock: 10.00 MHz
Interface 1: JtagHs1
Serial number: SN:210205812611
User ID: JtagHs1
Default clock: 10.00 MHz

FTDI API version: libftd2xx 1.1.4
Enumerating interfaces... 16 found
Interface 0: Digilent Adept USB Device A
Serial number: 210203825011A
User ID: 210203825011A
Default clock: 10.00 MHz
[[ Output trimmed for brevity ]]
Interface 10: Dev Board JTAG
Serial number: FTWB6M0W
User ID: FTWB6M0W
Default clock: 10.00 MHz
 
azonenberg@mars$ jtagd --api ftdi --serial 210203825011A --port 50200
JTAG server daemon [SVN rev 1230M] by Andrew D. Zonenberg.

License: 3-clause ("new" or "modified") BSD.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Connected to interface "Digilent Adept USB Device A (2232H)" (serial number "210203825011A")

Once the jtagd is running, you can connect to it using command-line tools such as jtagclient, or directly from C code using libjtaghal. The example here connects to a Digilent Atlys and verifies the device ID of the XC6SLX45 FPGA.

NetworkedJtagInterface iface;
iface.Connect(server, port);

//note use of RAII-style mutexing
//since jtagd is multi-client capable
JtagLock lock(m_iface);
m_iface->InitializeChain();

int ndev = m_iface->GetDeviceCount();
if(ndev == 0)
{
throw JtagExceptionWrapper(
"No devices found - invalid scan chain?",
"",
JtagException::EXCEPTION_TYPE_BOARD_FAULT);
}

//Verify that the board is an Atlys
//Should have a single XC6SLX45
XilinxSpartan6Device* pfpga = dynamic_cast<XilinxSpartan6Device*>(m_iface->GetDevice(0));
if(pfpga == NULL)
{
throw JtagExceptionWrapper(
"Device does not appear to be a Spartan-6",
"",
JtagException::EXCEPTION_TYPE_BOARD_FAULT);
}
if(pfpga->GetArraySize() != XilinxSpartan6Device::SPARTAN6_LX45)
{
throw JtagExceptionWrapper(
"Device is not an XC6SLX45",
"",
JtagException::EXCEPTION_TYPE_BOARD_FAULT);
}

The library internally uses low-level chain operations in order to talk to the device. The code below retrieves the "Device DNA" die serial number from a Spartan-6.

void XilinxSpartan6Device::GetSerialNumber(unsigned char* data)
{
JtagLock lock(m_iface);

Erase();

//Enter ISC mode (wipes configuration)
ResetToIdle();
SetIR(INST_ISC_ENABLE);

//Read the DNA value
SetIR(INST_ISC_DNA);
unsigned char zeros[8] = {0x00};
ScanDR(zeros, data, 57);

//Done
SetIR(INST_ISC_DISABLE);
}

Stay tuned for my next post on nocswitch and the NoC-to-JTAG debug bridge :)

by Andrew Zonenberg (noreply@blogger.com) at October 14, 2013 06:35 AM

SoC framework, part 5: JtagDebugController and nocswitch

All of the JTAG utilities I've been mentioning are quite handy if you need to load a bitstream onto a board from one of several workstations. But JTAG is capable of much more, including powerful on-chip debug features.

One of the often-overlooked hard IP blocks in Xilinx FPGAs is BSCAN. This primitive (usually described in the FPGA's configuration user guide) connects a JTAG data register for certain special instructions to FPGA fabric.

Xilinx 6 and 7 series FPGAs each contain four BSCANs, one connected to each of the four JTAG instructions USER1...USER4. These are very rarely used by user designs, but Xilinx utilities like ChipScope and the in-system SPI programming cores use them to communicate with the FPGA without needing additional connections.

The primitive is named BSCAN_SPARTAN6 in Spartan-6 and BSCANE2 in 7 series. As far as I can tell, both are functionally equivalent.


BSCAN_SPARTAN6 #(
    .JTAG_CHAIN(1)
)
user1_bscan (
    .SEL(instruction_active),
    .TCK(tck),
    .CAPTURE(state_capture_dr),
    .RESET(state_reset),
    .RUNTEST(state_runtest),
    .SHIFT(state_shift_dr),
    .UPDATE(state_update_dr),
    .DRCK(tck_gated),
    .TMS(tms),
    .TDI(tdi),
    .TDO(tdo)
);

The JTAG_CHAIN parameter specifies which of the four user instructions to use. I'll summarize the interesting ports below including some notes:
  • SEL goes high whenever USERx is loaded into the instruction register, regardless of the test state machine's current state.
  • CAPTURE, RESET, RUNTEST, SHIFT, UPDATE are one-hot flags that go high when the corresponding DR state is active. When the state machine is in the IR shift path, all flags are held low.
  • TMS is of little practical use since the state machine is already implemented for you.
  • TCK provides direct access to the JTAG clock. (Be sure to create a timing constraint for any signals clocked by this net.) In my experience the Xilinx tools often do not recognize this signal as a clock and use high-skew local routing; manual insertion of a BUFG/BUFH is advised for optimal results.
  • TDI and TDO are connected to the corresponding JTAG pins when in the SHIFT-DR state. You can connect any fabric logic you want to them.
Given this core plus libjtaghal on the PC side, we have a solid framework for building an on-chip debug system! The first step is to decide what sort of data to move over the link. Since my framework is NoC based, raw NoC frames seemed the natural choice. This would create a sort of layer-3 VPN encapsulating RPC/DMA transactions within JTAG scan operations.

After some experimenting with protocols I came up with one that seemed to work reasonably well. USER1 is the status/control register, USER2 is the RPC data register, and USER3 is the DMA data register. USER4 is left free for future expansion.

The FPGA side of the link is a module called JtagDebugController. It exposes RPC and DMA ports to the NoC; my current convention calls for addresses in subnet c000/2 to be routed to the debug bridge.

I'm deliberately not describing the actual on-wire protocol in depth because it's still in flux; when I get closer to a stable release I'll document it somewhere.

The PC side of the link is a C++ application using libjtaghal called "nocswitch". Example usage:

$ ./x86_64-linux-gnu/nocswitch --server localhost --port 50100 --lport 50101
Emulated NoC switch [SVN rev 1253:1254M] by Andrew D. Zonenberg.

License: 3-clause ("new" or "modified") BSD.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Connected to JTAG daemon at localhost:50100
Querying adapter...
Remote JTAG adapter is a Dev board JTAG (232H) (serial number "FTWOON60", userid "FTWOON60", frequency 10.00 MHz)
Initializing chain...
Scan chain contains 1 devices
Device 0 is a Xilinx XC6SLX25 stepping 2
Virtual TAP status register is 1000adba
Valid NoC endpoint detected

This spawns a nocswitch listening on localhost:50101 connecting to a jtagd at localhost:50100.

Once nocswitch is running, it polls the status register on USER1 constantly waiting for the "new RPC message" or "new DMA message" bit to be set. (This causes a lot of traffic on the nocswitch-jtagd link and uses a decent amount of CPU on the host; my custom 8-port ICE will include FPGA based polling and an onboard nocswitch along with the jtagd's to avoid this problem.)
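
To make that concrete, here is a very rough sketch of one polling pass, written in the same style as the Spartan-6 code earlier. This is not the actual nocswitch implementation: the INST_USER1 opcode, the 32-bit scan width, the status bit masks and the helper functions are all invented for the example, since (as noted above) the real register layout is still in flux.

//Hypothetical sketch of a single polling pass -- not the real nocswitch code.
//The constants and helpers below are invented for illustration only.
static const uint32_t STATUS_NEW_RPC = 0x00000001;
static const uint32_t STATUS_NEW_DMA = 0x00000002;

void JtagDebugHost::PollOnce()
{
    JtagLock lock(m_iface);

    //Select the USER1 status/control register
    SetIR(INST_USER1);

    //Shift out a status word (width and bit layout are assumptions)
    unsigned char zeros[4] = {0x00};
    unsigned char status[4] = {0x00};
    ScanDR(zeros, status, 32);
    uint32_t word = status[0] | (status[1] << 8) | (status[2] << 16) | ((uint32_t)status[3] << 24);

    //If the FPGA has a pending message, pull it in over the data registers
    if(word & STATUS_NEW_RPC)
        ReadRpcFrame();    //would scan USER2, per the register map described above
    if(word & STATUS_NEW_DMA)
        ReadDmaFrame();    //would scan USER3
}

A real implementation keeps doing this in a loop, which is exactly why the link stays busy and burns host CPU as described above.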

Client applications can then connect to nocswitch via a TCP-based protocol. The nocswitch assigns an address in c000/2 to each client in a manner somewhat reminiscent of DHCP; client applications (on the same machine or elsewhere on the LAN) can then send and receive NoC packets directly to the device under test. Multiple clients are fully supported; the nocswitch performs layer-2 switching between clients and the DUT as needed.

Nocswitch is able to switch frames from one client to another as well as just to the DUT; this permits a client to send messages to a NoC address without caring about whether it's a core in the SoC, a PC-side unit test, or even an RTL simulation (my mechanism for doing the latter will be described in a future post).

From a test case author's perspective, the NocSwitchInterface class implements the RPCAndDMAInterface class and supports the usual complement of operations.

printf("Connecting to nocswitch server...\n");
NOCSwitchInterface iface;
iface.Connect(server, port);

uint16_t eaddr = nameserver.ForwardLookup("eth0");
printf("eth0 is at %04x\n", eaddr);

printf("Resetting interface...\n");
iface.RPCFunctionCall(eaddr, ETH_RESET, 0, 0, 0, rxm);

Finally, here's a sneak peek at what's coming in future posts:
  • Hardware cosimulation, including a workaround for ISim's lack of Verilog PLI support
  • Splash, my build system inspired by Google Blaze
  • RED TIN, my internal logic analyzer (ChipScope/SignalTap replacement with lots of features useful in my work, like state machine decoding, RLE, and time-scale compression)
  • A look at both the hardware and software sides of the infrastructure for my dev board farm (batch scheduling, distributed build, automated testing, managed power distribution, and more). Hooking a single board up to a single JTAG dongle works fine if you only have one device but becomes a lot more of a pain to maintain when you have over twenty dev boards with more on the way!

by Andrew Zonenberg (noreply@blogger.com) at October 14, 2013 06:34 AM

October 12, 2013

Free Electrons

Videos from Embedded Linux Conference 2013

Better late than never: we are finally publishing a set of videos of 24 talks from the last Embedded Linux Conference, which took place earlier this year in San Francisco, California. These videos are coming in addition to the videos that the Linux Foundation had posted from this conference on video.linux.com.

Our videos are the ones from other talks, covering topics such as I2C, the BeagleBone, the Common Display Framework, Kernel debugging, Memory management in the kernel, usage of SPDX in Yocto, the SCHED_DEADLINE scheduler, the management of ARM SoC support in the kernel, real-time, kernel testing, and more. We’re also including below the full set of videos from the Linux Foundation, so that this page nicely gives links to all the videos from Embedded Linux Conference 2013.

Our videos

David AndersVideo capture
Texas Instruments
Board Bringup: You, Me and I2C
Slides
Video (38 minutes):
full HD (269M), 800×450 (151M)


Jayneil DalalVideo capture
Texas Instruments
Beaglebone Hands-On Tutorial
Slides
Video (66 minutes):
full HD (444M), 800×450 (249M)


Jesse BarkerVideo capture
Linaro
Common Display Framework BoF
Video (113 minutes):
full HD (761M), 800×450 (389M)



Alison ChaikenVideo capture
Mentor Embedded Software Division
Embedded Linux Takes on the Hard Problems of Automotive
Slides
Video (54 minutes):
full HD (359M), 800×450 (152M)


Kevin ChalmersVideo capture
Texas Instruments
RFC: Obtaining Management Buy-in for Mainline Development
Slides
Video (36 minutes):
full HD (253M), 800×450 (140M)


Michael ChristoffersonVideo capture
Enea
Yocto Meta-Virtualization Layer Project
Slides
Video (47 minutes):
full HD (330M), 800×450 (187M)


Kevin DankwardtVideo capture
K Computing
Survey of Linux Kernel Debugging Techniques
Slides
Video (50 minutes):
full HD (350M), 800×450 (206M)


Ezequiel Alfredo GarciaVideo capture
VanguardiaSur
Kernel Dynamic Memory Allocation Tracking and Reduction
Slides
Video (56 minutes):
full HD (398M), 800×450 (235M)


Christopher FriedtVideo capture
Research In Motion
Gentoo-Bionic: We Can Rebuild Him. Better. Stronger. Faster.
Slides
Video (39 minutes):
full HD (272M), 800×450 (154M)


Gregoire GentilVideo capture
Always Innovating
Lessons Learned in Designing a Self-Video, Self-Hovering Nano Copter
Video (56 minutes):
full HD (391M), 800×450 (225M)



Mark Gisi, Mark HatleVideo capture
Wind River Systems
Leveraging SPDX with Yocto
Video (53 minutes):
full HD (376M), 800×450 (204M)



Yoshitake KobayashiVideo capture
TOSHIBA Corporation
Deadline Miss Detection with SCHED_DEADLINE
Slides
Video (38 minutes):
full HD (274M), 800×450 (158M)


Tetsuyuki KobayashiVideo capture
Kiyoto Microcomputer
Tips of Malloc and Free
Slides
Video (39 minutes):
full HD (277M), 800×450 (163M)


Tristan LelongVideo capture
Adeneo Embedded
Debugging on a Production System
Slides
Video (51 minutes):
full HD (354M), 800×450 (195M)


Noor UI MubeenVideo capture
Intel Technology India Pvt Ltd
Making Gadgets Really “cool”
Slides
Video (45 minutes):
full HD (298M), 800×450 (122M)


Hisao MunakataVideo capture
Renesas Electronics
How to Cook the LTSI Kernel with Yocto Recipe
Slides
Video (42 minutes):
full HD (295M), 800×450 (166M)


Olof JohanssonVideo capture
Google
Anatomy of the arm-soc git tree
Slides
Video (50 minutes):
full HD (348M), 800×450 (192M)


Mark OrvekVideo capture
Linaro
Application Diversity Demands Accelerated Linux Innovation
Slides
Video (38 minutes):
full HD (273M), 800×450 (158M)


Thomas PetazzoniVideo capture
Free Electrons
Your New ARM SoC Linux Support Checklist!
Slides
Video (60 minutes):
full HD (418M), 800×450 (231M)


Matt PorterVideo capture
Texas Instruments, Inc.
Kernel Testing Tools and Techniques
Slides
Video (60 minutes):
full HD (405M), 800×450 (230M)


Brent RomanVideo capture
Monterey Bay Aquarium Research Institute
Making Linux do Hard Real-Time
Slides
Video (24 minutes):
full HD (173M), 800×450 (101M)


Mans RullgardVideo capture
ARM/Linaro
Designing for Optimisation
Slides
Video (50 minutes):
full HD (353M), 800×450 (202M)


Chris SimmondsVideo capture
2net Limited
The End of Embedded Linux (as we know it)
Video (46 minutes):
full HD (293M), 800×450 (137M)



Hunyue YauVideo capture
HY Research LLC
uCLinux for Custom Mobile Devices
Slides
Video (40 minutes):
full HD (283M), 800×450 (151M)

Linux Foundation videos

Joo-Young HwangVideo capture
Samsung Electronics Co., Ltd.
F2FS, Flash-Friendly File System
Slides
Video : on video.linux.com


Linus WalleijVideo capture
ST-Ericsson
Pin Control and GPIO Update
Slides
Video : on video.linux.com


Mark GrossVideo capture
Intel
The ‘Embedded Problem’ as Experienced by Intel’s Reference Phones

Video : on video.linux.com


Gap-Joo NaVideo capture
Electronics and Telecommunications Research Institute (ETRI)
Task Scheduling for Multicore Embedded Devices
Slides
Video : on video.linux.com


Joel FernandesVideo capture
Texas Instruments, Inc
FIT Image Format: Inspired by Kernel’s Device Tree
Slides
Video : on video.linux.com


Steven RostedtVideo capture
Red Hat
Understanding PREEMPT_RT (The Real-Time Patch)
Slides
Video : on video.linux.com


Ruud DerwigVideo capture
Synopsys
Using GStreamer for Seamless Off-loading Audio Processing to a DSP
Slides
Video : on video.linux.com


Rob LandleyVideo capture
Multicelluar
Toybox: Writing a new Linux Command Line from Scratch
Slides
Video : on video.linux.com


Denys DmytriyenkoVideo capture
Texas Instruments
Pre-built Binary Toolchains in Yocto Project
Slides
Video : on video.linux.com


Anna DushistovaVideo capture
Me, Myself and I
Target Communication Framework: One Link to Rule Them All
Slides
Video : on video.linux.com


Jim HuangVideo capture
0xlab
olibc: Another C Runtime Library for Embedded Linux
Slides
Video : on video.linux.com


Jake EdgeVideo capture
LWN.net
Namespaces for Security
Slides
Video : on video.linux.com


Beth FlanaganVideo capture
Intel
Listening to your Users: Refactoring the Yocto Project Autobuilder

Video : on video.linux.com


Katsuya MatsubaraVideo capture
– , IGEL Co., Ltd.
Optimizing GStreamer Video Plugins: A Case Study with Renesas SoC Platform
Slides
Video : on video.linux.com


Behan WebsterVideo capture
Converse in Code Inc
LLVMLinux: Compiling the Linux Kernel with LLVM
Slides
Video : on video.linux.com


Jim Zemlin, George GreyVideo capture
The Linux Foundation, Linaro
Working Together to Accelerate Linux Development

Video : on video.linux.com


Andrew ChathamVideo capture
Google
Google’s Self-Driving Cars: The Technology, Capabilities & Challenges
Video : on video.linux.com


Laurent PinchartVideo capture
Ideas on board SPRL
Anatomy of an Embedded KMS Driver
Slides
Video : on video.linux.com


Scott GarmanVideo capture
Intel Open Source Technology Center
Atom for Embedded Linux Hackers and the DIY Community
Video : on video.linux.com


Mike AndersonVideo capture
The PTR Group, Inc.
Controlling Multi-Core Race Conditions on Linux/Android
Video : on video.linux.com


Tracey Erway, Nithya RuffVideo capture
Intel Corporation, Synopsys
Can You Market an Open Source Project?
Video : on video.linux.com


Dave StewartVideo capture
Intel
Code Sweat: Embed with Nightmares
Video : on video.linux.com


Gregory ClementVideo capture
Free Electrons
Common Clock Framework: How to Use It
Slides
Video : on video.linux.com


Sean HudsonVideo capture
Mentor Graphics
Building a Custom Linux Distribution with the Yocto Project
Slides
Video : on video.linux.com


Tsugikazu Shibata
NEC
How to Decide the Linux Kernel Version for the Embedded Products to Keep Maintaining Long Term
Slides
Video : on video.linux.com


Mathieu Poirier
Linaro
In Kernel Switcher: A Solution to Support ARM’s New big.LITTLE implementation
Slides
Video : on video.linux.com


Russell DillVideo capture
Texas Instruments
Extending the swsusp Hibernation Framework to ARM
Slides
Video : on video.linux.com


John MehaffeyVideo capture
Mentor Graphics
Security Best Practices for Embedded Systems
Slides
Video : on video.linux.com


Leandro PereiraVideo capture
ProFUSION Embedded System
EasyUI: No Nonsense Mobile Application Development with EFL

Video : on video.linux.com


Khem RajVideo capture
OpenEmbedded
Bringing kconfig to EGLIBC
Slides
Video : on video.linux.com


Aaditya KumarVideo capture
Sony India Software Centre Pvt Ltd
An Insight into the Advanced XIP Filesystem (AXFS)
Slides
Video : on video.linux.com


Pantelis AntoniouVideo capture
Antoniou Consulting
Adventures in (simulated) Asymmetric Scheduling
Slides
Video : on video.linux.com


Mike Anderson, The PTR group; Zach Pfeffer, Linaro; Tim Bird, Sony Network Entertainment; David Stewart, Intel; Karim Yaghmour, Opersys (Moderator)Video capture

Is Android the new Embedded Linux



Video : on video.linux.com


George Grey, CEO, Linaro, Jim Zemlin, Executive Director, The Linux FoundationVideo capture

Working Together to Accelerate Linux Development

Video : on video.linux.com


Frank RowandVideo capture
Sony Network Entertainment
Using and Understanding the Real-Time Cyclictest Benchmark
Slides
Video : on video.linux.com

by Thomas Petazzoni at October 12, 2013 01:32 PM

October 08, 2013

Richard Hughes, ColorHug

How to take 16:9 Screenshots

A few people contacted me after discovering that screenshots should be taken in a 16:9 aspect ratio. The question was basically, how do I do that?

After trying to do this myself for my applications, I too discovered it’s hard. The wmctrl command doesn’t seem to play very nicely with CSD, and certainly won’t work in Wayland. So, Owen Taylor (GNOME hacker extraordinaire) to the rescue.

Owen wrote a GNOME Shell extension (which I’ve modified a little) which resizes the current window to a 16:9 size when you press Ctrl+Alt+S. If you press it again, the window will get larger, to the next recommended 16:9 size. Press Ctrl+Alt+Shift+S and the window will get smaller to the previous size. Get the code on github or on extensions.gnome.org. I probably need some more testing as well. Patches very welcome if you’re good at shell extensions :)

With this extension installed I was able to screenshot all my applications that I maintain in no time at all. I’ve been uploading the screenshots into ${projects}/data/appdata and referencing the git.gnome.org URL in the AppData file so anyone can easily update them when the code changes, but this is completely up to you.

by hughsie at October 08, 2013 02:29 PM

October 07, 2013

Richard Hughes, ColorHug

AppData Validation Tool Update 2

There’s a new version of the AppData validation tool, and this one actually downloads and validates the screenshots you’ve included in the .appdata.xml files. See the AppData specifications for guidelines on how to make good screenshots.

Get it while it’s hot: appdata-tools-0.1.4.tar.xz, or I’ve built Fedora RPMs too.

Please yell if you think you’ve written a valid file, and it fails to validate using the tool. We’ve added a lot more checks, and we’re getting more strict — so please revalidate your file if you’ve already done so. Thanks!

by hughsie at October 07, 2013 02:45 PM

October 06, 2013

FreakLabs

Announcements: chibiArduino v1.03 Release and New Walkthrough Tutorial Series

I'm happy to announce that I just released the chibiArduino library v1.03. The library functionality is very stable these days, and the changes were mostly minor bug fixes and updates. The main change is that settings for the Freakduino long range wireless board were added and tuned. I also did a major update to the chibiArduino usage documentation, which hadn't been updated since 2010. It was painful reading through it, and I really need to be more disciplined about maintaining...

October 06, 2013 06:07 AM

ZeptoBARS

KR1858VM1 - soviet Z80 : weekend die-shot

KR1858VM1 is a Z80-compatible CPU manufactured in the USSR. The die marking "U880/6" suggests that it was also designed at the East German company VEB Mikroelektronik "Karl Marx" in Erfurt (MME). Compared to the T34VM1, the die size is shrunk by a factor of 1.6 and the IO is reworked.

Also, you can compare it to MME Z80A and Zilog Z80.

Die size 3601x3409 µm.


October 06, 2013 05:37 AM

FreakLabs

Freakduino Long Range Wireless Board WalkThrough - Basic Usage

This is a walkthrough of the basic setup and usage of the Freakduino long range wireless boards. I originally designed the Freakduino board series and chibiArduino software stack so that it could be a simple way to set up a wireless connection without having to understand complex protocol details. This was a big drawback in many of the more advanced protocol stacks I've worked on, where complex and detailed knowledge was required just to send simple packets. I tried to...

October 06, 2013 02:32 AM

October 04, 2013

ZeptoBARS

Fairchild 74F109PC - dual JK flip-flop : weekend die-shot

Fairchild 74F109PC - dual JK flip-flop, part of fastest bipolar 7400 TTL family - F.

Die size 1436x1255 µm.


October 04, 2013 08:39 PM

Video Circuits

Jacques Guyonnet

Jacques Guyonnet is a Swiss composer and video artist who worked with Geneviève Calame, whose video I posted here
thanks to youtube user apocaline




by Chris (noreply@blogger.com) at October 04, 2013 02:23 AM

October 03, 2013

OggStreamer

#oggstreamer – 54 pcs. finished :=)

I am happy to say that I just finished the work on the hardware – 54 units are now assembled. Although the firmware still needs some tweaking and bug fixing, this was one major step today :)


The OggStreamer is waiting for you ….


by oggstreamer at October 03, 2013 01:43 PM

October 02, 2013

Elphel

FPGA is for Freedom

In this post I write about our current development, my first experience with Xilinx Zynq, and also try to summarize the 10+ years experience with Xilinx FPGA devices. It is a mixture of the admiration for their state of the art silicon devices and frustration caused by the software. Please excuse my sometimes harsh words and analogies – I really would like to see Xilinx prosper and acquire software vision that matches the freedom that Ross Freeman brought to developers of the electronic devices when he invented FPGA and started Xilinx.

Before the new camera design started

We planned to update our current line of cameras for some time – Elphel current model NC353 is in production for almost 7 years. Thanks to the Xilinx FPGA, it is possible to upgrade it long after it was built. In 2009 we developed the new system board, built a first unit and started working with it. This board was designed around new (in 2009) Xilinx Spartan 6 and Texas Instruments DaVinci processor. Memory and the CPU performance were increased, the board could support two sensors simultaneously (instead of just one in the older models), but in the 10373 camera system board I was not satisfied with the bandwidth of the datapath between the FPGA and the processor – it was enough for current sensors but in my opinion it did not have enough margin for the future sensor upgrades and we decided to put this project on hold and look for the better match between the CPU and FPGA.

Later we heard the news about the coming Xilinx Zynq devices, but initial rumors indicated that it is very unlikely these chips will be supported by freeware development software. Luckily, that proved to be wrong and Xilinx announced that most of the devices (excluding only the largest XC7Z045) will be supported by the free for download WebPack. Zynq combines dual core ARM CPU (with a rich set of standard peripherals) and high performance FPGA on the same chip, so it should be an exact match for our purposes and intrinsically high bandwidth between CPU and FPGA – parameter that killed our NC373 camera before it was born.

Impressed by Zynq when working on the board design

The news was really exciting, and I was waiting impatiently for the new devices to become available and the free for download status of the required software to be confirmed – many of Elphel customers are developers and we can not force them to acquire software that would be more expensive than the hardware they purchase from us. By June 2013, when I was able to designate time for the full time work on the new project, both conditions were met and I started working on the circuit and PCB design. Zynq features looked very nice and documentation was quite sufficient to work on the design, it turned out to have some little but very convenient bonuses like decoupling capacitors embedded in the package – we use memory mounted on the opposite to the CPU side of the board so it is difficult to have short decoupling connections for both of them. High speed serializer/deserializer capability of virtually all of the I/O pins made it possible to have the dual-function sensor port connectors compatible with our current sensor front ends (SFE) with 12-16 bit parallel interface and capable of running serial links (such as multi-lane MIPI). Backward compatibility will reduce time before we’ll be able to start shipping NC393 cameras (and replace system boards in our Eyesis line of products), high-speed serial capability will allow cameras to keep up with new emerging high-performance sensors.

Initially, I planned to have only 3 sensor ports: one GTX to implement SATA interface, some GPIOs for inter-camera synchronization and interfacing daughter-boards (similar to what we had on our 10369 interface board for the NC353 camera) and dedicated DDR3 memory. Yes, Zynq has really nice access from the PL (programmable logic – FPGA part of the chip) to the system memory, but it is still beneficial to have memory that is not shared with the CPU and has a specialized controller fine-tuned for image processing applications. And I thought I’d need 676-ball package to fit all external devices. But by carefully going through the documentation, I realized that with the flexible I/O banking of Zynq it is possible to fit everything needed in a significantly smaller 484-ball package and to have four (instead of just three) sensor ports.

 A small cloud on the horizon

When working on the circuit design, I needed to make sure that the pins I designate for the DDR3 memory interface are valid – such interface implementation is rather challenging and there are multiple rules that have to be satisfied simultaneously. Even as we do not plan to use the Xilinx stock memory controller in the camera, I thought that the software “wizard” that instantiates it in the design may be a good tool to verify the selected pinout – that’s all that I needed at this stage of the design. So I went ahead to install the software. During this process, I learned that to use the freeware software (and I already explained why it is the only kind of the non-free software we can use for our products), I had to install a mandatory component that transmits data from my computer to Xilinx. It is funny – being a free software/open hardware company, we post all our development files on Sourceforge, but they still prefer to dig in our “dirty laundry”. This was very unpleasant news, and the license agreement stated that, because of the nature of the Internet, they have no responsibility if any of the information they get from my computer accidentally gets somewhere it was not supposed to go. OK, I decided, I’ll deal with it later when I really need it to work on the FPGA design; for now, I just need to install it and try the memory controller generator, and then uninstall the software (hopefully together with the spy agent).

Unfortunately, as it often happens, the “wizard” turned out not to be smart enough, and it told me that the 16-bit wide DDR3 interface I needed will not fit. I did verify the rules stated in the documentation again, searched online information on questions and answers about similar cases – all confirmed that the capable Zynq silicon could handle the job, but the software “wizard” prohibited it. It is quite understandable that software programs have their limitations, but when the software pretending to be “smart” is inflexible, when it (as most of the non-free code) does not allow user to comment out (to disable/bypass) specific checks, it causes frustration. So this software tried to make Zynq look less capable than it actually is, and also tried to convince me that instead of the 484-ball package, I should use larger 676-ball one, leaving less room for other components. Larger package would be more expensive for our customers too, of course.

So I just decided to move on with the circuit/PCB design regardless of my disagreement with the software – this development was described in the several previous blog posts.

By early August, the PCB design of the Zynq-based camera system board (together with the two companion boards) was finished. I went through all the design again trying to weed out as many design errors as I could, and later that month we released the files into production. While waiting for all the components to come and the PCB to be manufactured, I started to look at the first steps in the software development I would need to be able to verify the board design. I was expecting to take the U-boot files developed for existent Zynq-based evaluation boards and tweak them to match our hardware – a rather straightforward process I did before when breathing life into other systems. So first make U-boot work, then – proceed with the Linux kernel – both “Linux” and “U-boot” were mentioned in the documentation so I was sure I understood the overall process. I was wrong.

FSBL – a piece of proprietary code generated by the proprietary tools

Of course I understand that it may take another ten years before Xilinx will realize that the combination of the “blank tape” idea of the FPGA that they pioneered with the “totalitarian” style of development tools software is not very efficient – I’ll get to this topic later in the post. At the moment I was just looking for the Open Embedded – based distribution for existent boards that I can modify for our hardware. Internet search revealed that I still have to use proprietary tools to generate the first stage boot loader (FSBL) – piece of code responsible for the hardware initialization. This code is launched by the RBL – embedded in the chip ROM boot loader and in its turn the FSBL (starting from the Zynq OCM – internal on-chip memory) initializes external DRAM, loads and launches U-boot. Then it is the U-boot’s responsibility to take it from there and load and pass control to GNU/Linux (in the sequence that interests us). Starting with U-boot, all the code is Free Software (under mandatory for this software GNU GPL license), but not the FSBL. OK, I thought – I’ll use the tools to generate a binary blob and we’ll distribute it with our cameras. Elphel users will be able to use just the free software to re-build the camera flash image as they want. Binary blobs are nasty, and Richard Stallman would likely refuse to deal with our cameras, but we are living in the real world and so need something to start with – we can try to replace that piece of code later.

What I was not sure about was the legal status of such distribution, at least all the text files generated had Xilinx copyright and “all rights reserved” notices in the header. Funny thing is that they also have “this file is automatically generated” in the same header. To me “generated” sounds more like “created” than “copied” or “compiled” and I did not know that robots are already recognized as authors of the original works covered by the Copyright Law. So I asked this question on Xilinx forum but I was not able to get a clear answer to that question – can we redistribute FSBL custom-generated by Xilinx tools for our hardware?

We did try to generate FSBL with the tools – I failed to install the software on my computer – probably because it had too old of a version of Kubuntu and there was a conflict between the libc6 on my system and the licensing software (this funny make-pretend licensing of freebies). Oleg was luckier than me – he has a current Kubuntu version, but his operating system was still not perfect and did not completely match the development tools. When he tried to re-assign MIO pins in the tools GUI – nothing seemed to happen. Later he discovered that it actually did change; it just did not show the changes. So when he pressed “Save” and opened the same page again, there were the new (modified) values there. A little trick, but it made possible to proceed with the tools.

There are other things that I did not like in the recommended way of the FSBL generation. One is that though I usually prefer a nice GUI to the “black screen” of the command line interface, there are some definite limitations. I like GUI when it saves me from remembering a lot of commands and command options – it could be OK if I had to do my job in a relatively small area. But in a small company, we have to often switch from mechanical design to web development, Verilog code debugging, kernel drivers or image processing – all these activities have their specific tools. But GUI for new board configuration is not that useful according to my personal experience. A standard configuration file with many properly commented settings is more convenient. Configuring a new Zynq-based board for most developers is something they do not need to perform a dozen times a day – once a year is a more reasonable estimate. When you develop a new board you have to go through many manual steps: studying documentation, looking for the board components, and developing a circuit diagram and PCB layout. Going through a long list of settings, reading comments and optionally modifying some values is a very useful process for the new board, as it can help to avoid design errors that would be left unnoticed if you just clicked on several GUI buttons. Adding more configuration parameters to GUI is usually more expensive than just defining more configuration values, so more parameters are likely to be hard-coded in the software and so out of user control. Another problem of the GUI approach – I was concerned I would eventually hit a similar problem I already hit with the smart Memory Interface Generator I described above, the problem that was always a nightmare for me when I had to upgrade the FPGA development tools – new version often refused to compile the code that worked with the old version, changed the rules that are impossible to bypass. And as the code is closed, you do not have many options to tell the software that you are the boss, not it.

Configuring Zynq hardware for a commercial evaluation board with GUI – it may look cool, but the configuration is mostly already defined by the board design, so each board can come with the board-specific long and boring (but nicely commented) configuration file.

 The Ezynq project

Considering all these shortcomings of the use of the FSBL I decided to evaluate feasibility of bypassing this proprietary code completely. According to Xilinx documentation, it seemed possible, and we did not need all of the functionality of the FSBL and the FSBL generation software. We definitely do not need booting of the secret code (Zynq has elaborate hardware and software support for such feature); we also do not need to configure the FPGA portion (PL) until the system is running operating system (FSBL allows early configuration). Our plan was to add extra functionality (previously handled by FSBL) to U-boot itself so all the board configuration is done with #define CONFIG_* statements in the appropriate header files. To prevent conflict between the new parameters and already existent Zynq-related ones in U-boot name scope, we added ‘E’, starting all the parameters with “CONFIG_EZYNQ_” – this is where the project name came from. The project is available in Elphel Git repository at Sourceforge.

For this unexpected project, we purchased a nice small MicroZed evaluation board (it is the first evaluation board I ever used in my career) so we had official software that boots and runs on this board. Even implementation of the subset of the FSBL functionality, with configuration files ready for only one board, having several known (and probably plenty of unknown) bugs, took me a whole month of programming. In that process I had to go through the documentation on many of the Zynq peripherals and their control registers, DDR3 memory interface – that will likely help me when developing the software for the actual camera. While working on the reimplementation, I was comparing the generated FSBL output against documentation and noticed several mismatches between the two, but none seem to be critical. Our code will need some cleanup – at the beginning I did not know the exact details of what will be needed, and this is my first program in Python, but the program proved to work and we’ll maintain it and use it with future Elphel camera software distributions. I also believe that there are other developers who share my view that the best FPGA silicon on the planet deserves different software, software made for the developers – not just for the cool looking presentations. And we would like other developers to try this code, creating configuration files for the Zynq-based boards they have. There are more technical details in the README file in the git repository and we are always willing to answer questions about this program.
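
To give a feel for how this works in practice, below is a purely illustrative excerpt of such a board header. The macro names and values here are invented for the example – the real parameter names and their meanings are documented in the README in the Ezynq repository mentioned above.

/* Hypothetical illustration only -- these are not the actual Ezynq parameter names */
#define CONFIG_EZYNQ_PS_CLK_MHZ       33   /* PS_CLK input crystal frequency */
#define CONFIG_EZYNQ_DDR_TYPE_DDR3     1   /* memory type populated on the board */
#define CONFIG_EZYNQ_DDR_FREQ_MHZ    533   /* DDR clock to program into the PLL */
#define CONFIG_EZYNQ_DDR_BUS_WIDTH    16   /* x16 interface, as on the 10393 board */
#define CONFIG_EZYNQ_MIO_UART1_TX     48   /* MIO pin assignment for the console UART */

The point is that every board-specific decision lives in one plainly commented header that U-boot is built against, instead of being scattered across GUI dialogs.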

 Why I believe Xilinx will turn towards Free Software

When Ross Freeman, FPGA inventor and one of the Xilinx founders, compared the new device with a “blank tape,” he defined the future of the new class of the devices; devices where the user, and not the chip manufacturer, is in full control. It would be like it was with the magnetic tapes where people could record whatever they liked, and not just what the record companies did. It was especially important in the USSR, where I was born – the most famous and loved by the Soviet people Russian singer, Vladimir Vysotsky, “lived” mostly on the magnetic tapes recorded by people against the will of the Soviet government. Magnetic tapes were the medium that brought us the Beatles – we loved them and perceived them as a “Band of Freedom.”

Freedom is the intrinsic feature of the FPGA. I think it is better than “Field” for the first letter in the acronym. Unfortunately, the analogy with the “blank tape” does not go much farther – in the non-free country, we were free to use any brand of the tape recorder (domestic or brought from abroad) with the same tape. If the Soviet government had the same level of control over the recorders as the FPGA manufacturers have now over the required development tools, we would never be able to listen to Vysotsky or the Beatles.

Some ten years ago, Wim Roelandts, then CEO of Xilinx, had a presentation in Salt Lake City that I attended. When answering questions, he said that more than 98 percent of the company revenue comes from the FPGA (“blank tape”) sales, and less than two percent from the software. Maybe the numbers have changed by now, but I do not think the difference is radical.

I can only guess at what the rationale behind the idea of reducing the value of the main (98 percent) product for the questionable benefit of a two percent byproduct is. They probably can not believe that freedom may be monetized, it increases the value (and the lack of it – decreases) of the underlying product by more than those tiny two percent. They may think that it is irrelevant, and as they produce the best tape in the world, they should use it to the competitive advantage of their tape recorders.

There is the other side of this. Totalitarianism is not competitive in the long run. The USSR was strong in the middle of the 20th century and was able to win against Hitler in WWII. Just 10 years before its collapse, I could not believe that any change would happen in my lifetime – but there is no more USSR now. In the end of the last century (and the beginning of this one), Microsoft was considered the most successful software company, a model for others. And I see some similarity between the two – trying to keep everybody under control – be it with the help of the KGB or EULA. Soviet people did not have private property (only so called “personal property”) – virtually everything belonged to the State. Same with the users of proprietary software – you do not own what you paid money for, you are just granted a temporary right to use it. Microsoft is far from over, of course, but it has seen better times, and few are considering it as a powerful Empire now. Yes, they still dominate on the desktops, but the same approach failed in the modern areas of the web and mobile devices. In these days you have to give more control to the users – or risk becoming irrelevant. Initially Apple tried hard to prevent “jail-breaking” and not to let people to install their own software. Yes, they sure still have a lot of control, but even they had to yield some of it under the pressure of the users and competitors. It is even more valid for the faster growing Linux-based Android devices.

Xilinx itself is gradually migrating towards Free Software, at least for the code that runs on their devices. I believe this process is welcomed by Xilinx developers (who made a great job in coding Free software submitted to at least Linux kernel and U-boot) but is still not embraced completely by the management who (software-wise) got stuck in the 20th century, when the microsoviet type of the program was a model to follow. But this fight is an uphill battle, and they have to “surrender” more and more. Xilinx SDK is already based on Free Software Eclipse IDE and software components licensed under GNU GPL. I count on this trend and think that it will provide Xilinx with their own experience and prove to them that developing Free Software gives more value in return by expanding application areas and results in increased market share for the devices.

But this shift to Free Software does not yet apply to the main part of the software tools – tools for the FPGA or programmable logic (PL) in terms of Zynq development.

The Xilinx proprietary stronghold that still seems as stable as the USSR in early 1980-s is the FPGA development tools. They do not see much pressure to stop effectively crippling their hardware by the software because 1) Xilinx FPGAs are still the best and 2) Xilinx competitors cripple their products no less than Xilinx does itself. When I first started using reconfigurable FPGA in 2002, I was considering Altera too, but even their freebie software license had to be renewed each 3 months, so there was no guarantee that you’ll always be able to use the code you previously developed.

Competition on the FPGA market is increasing, and in addition to the traditional Xilinx+Altera duopoly, new players are emerging, such as Achronix and Tabula. It seems to me, however, that their bet to beat the duopoly is based on the sheer technological advantage of the Intel 14nm process, not on developer-friendly software that can really make a difference in this field.

Installation of the “spyware” as a mandatory component of the freeware FPGA development tools (in the paid-for versions this functionality may be disabled, but it is on by default) seems to be considered of high value – otherwise they would not risk alienating their loyal customers. Why do they do it? Probably in a desperate move to get more of the real life examples to improve their place and route and other related algorithms. I am not a specialist in these algorithms, but generally they are NP-hard and there are many approaches how to find good-enough solutions and improve them. And this involuntary feedback through the spyware is needed to train the algorithms being developed. Translated to USSR analogy, it would be as utopian as to assign 3 KGB agents to every citizen to find out what each of them wants and then decide in some centralized way how to make them all feel happy. Or Apple watching on the customer use of the phones to guess what they need and designing all the apps in-house that are currently available from the independent developers. Proprietary operating systems closed to developers and fully controlled by a single company already proved their inferiority on the mobile devices where they faced a real competition.

Xilinx has a unique opportunity to change this unfortunate state. They develop, produce and sell the Real Things, and Xilinx can become as recognized in FPGA development software, as it is recognized for the FPGA devices now. They are in a position not just to invest heavily in the Free Software infrastructure as IBM and other companies do, but to do much more: jump-start and lead the new class of the FPGA development tools – tools where users are partners, not just the subjects of the surveillance. Starting and maintaining a framework of the Free (not freeware, like WebPack) tools could make a real difference and create value, like independently designed apps create value for Apple or Android gadgets. Just look around – it is the second decade of the 21st century, not the late 20th. Let users (and Xilinx users are really smart developers) get to the controls – they will innovate, and some may find solutions that would never come to the mind of Xilinx staff engineers.

One may say that Xilinx already has an App Store equivalent, but the marketplace for IP cores (“vinyl records” that can be copied to the “magnetic tapes” under certain conditions) is not a substitute for the free and open FPGA development framework – users can exchange (under various free and non-free licenses, with or without compensation) their “tape records” themselves without any Xilinx involvement. In our current design, we too plan to use at least one Verilog module designed by others under GNU GPL license, and we will handle it between us and the developer directly. The other difference is that iPhone users are just phone users and the apps they download increase the functionality (and, in effect, the value) of the phone they purchase. When an FPGA developer uses a core designed by others – she just gets part of her job already done. But the increased functionality of the tools is still needed, and this functionality is usually related to much more elaborate activity than that of the casual phone app user, and FPGA developer is more likely to be able to contribute back. That does not mean, of course, that many developers will contribute new P/R algorithms, but evaluating different algorithms (including experimental ones), tweaking parameters of the goal functions – especially when the default setup can’t make it for the user - this is what many (myself included) can do. It is especially likely to happen if the users are provided with some meaningful comments on the nature of the algorithms and variable parameters.

Such development framework will make it possible for independent researchers to experiment with the new methods of (for example) timing closure, and Xilinx will have different ways to encourage (and in some cases sponsor) such development that will require less investments than when everything critical is done in-house and behind the closed doors.

When implemented, such an approach will provide multiple advantages:

  • Effectively increase the value of Xilinx silicon devices: unleash more of their power and hand it to the users. Such cases as I described above (MIG pushing me to use larger than actually needed package) will be eliminated – in my case I would just troubleshoot the MIG code for my case and submit suggested changes (I’m sure I’m not the only one who needs to use x16 DDR3 with Zynq in 484-ball package). And until the needed changes will be included in the main branch, others who need it will just be able to use my modified version.
  • Reduce the cost of the tools software development and increase its capability and quality by integrating Free Software tools (i.e. Icarus Verilog that we use ourselves for simulation of the products based on Xilinx FPGA) and user contributions. These contributions will be enabled by the open code of the software, and users will be more eager to get involved when they are treated as partners.
  • Improve customer relations. I’m sure that it’s not just me who hates the spyware planted on their computers. And Xilinx surely knows this too, so I consider the current state as a desperate measure to bring in the data that customers are reluctant to provide voluntarily. Treating users as partners (and they really should be partners as improvements of the software tools benefit both parties) is a better way to get the needed feedback (and even contributions, as users can do part of the work themselves) than the current model of interaction. Linux kernel gets on average five patches per hour from thousands of developers (Xilinx included) freely.

Is there a risk that competitors will be able to benefit from this Free Software? Sure they will; as anybody else, they will be able to use it. But they will have to play by the same rules. Even if they will be able to copy all the software and adapt it to their products, keeping the code closed (only possible if the license will be weak enough to allow it), their non-free product will have lower value for the users even if the hardware alone has the same (or even higher) performance.

I am not sure if Xilinx has another decade to stay with the old software paradigm, because as the performance and complexity of the FPGA is increasing, the quality of development software gets more important, and “quality” means the real quality for developers, not only the nice-looking interface. So if some new player appears on the FPGA field able to offer silicon lagging behind the front runners by some 3-4 years, but offering a development environment based on Free Software – that company will definitely have a competitive advantage. If that happens, I’ll go for the software, but I would definitely prefer to have the best of each – superior Xilinx FPGA devices supported by the developer-friendly, Free Software; the only software that matches the essence of the FPGA idea – its freedom.

by andrey at October 02, 2013 10:36 PM

Free Electrons

Increasing activity in the Buildroot community

In the recent times, the Buildroot project has seen a particular high level of activity, with a significant number of new contributors and contributions. It is an interesting opportunity to have a look at some statistics of the project activity in the last years: they show that the Buildroot project is really active, and in rapid development.

First, a look at the number of commits per month is an obvious way of looking at the activity of an open-source project. For two years, the project has seen at least 150 commits each month, and for the last year, most months have seen between 300 and 400 commits.

Buildroot activity in commits

Another interesting data point is that this increasing number of commits is not only due to an increasing effort from the existing core developers, but rather due to an increasing number of contributors. The following graph, which displays the number of unique contributors having had patches merged each month, clearly shows that the Buildroot community is growing. From an average of 10-15 contributors per month a few years back, the project now has between 30 and 40 unique contributors each month.

Number of Buildroot contributors

The mailing list activity also nicely reflects this increasing activity: it now receives between 1500 and 2000 e-mails almost every month, which means between 50 and 65 e-mails per day, and it is starting to become difficult to read everything!

Number of Buildroot mailing list posts

Finally, the number of packages has also increased progressively over the last two years. As can be seen on the graph below, the period 2008 → 2011 hasn’t seen a big increase in the number of packages, as it was a period mainly focused on refactoring and cleanup work. After this cleanup work, it seems that Buildroot has started gaining in popularity, and more work was done to add more packages for various useful open-source components in embedded systems. Since 2011, the number of packages has been growing regularly, starting from less than 700 in 2011 to reach almost 1200 packages today.

Number of Buildroot packages

All in all, those four graphs clearly show a nice increase of activity within the Buildroot project, which is really cool!

Some notes on how the data was computed:

  • The number of commits per month was obtained by doing a git log --pretty=oneline --since=yyyy-mm-dd --until=yyyy-mm-dd | wc -l for each month.
  • The number of contributors was obtained by doing a git shortlog -sn --since=yyyy-mm-dd --until=yyyy-mm-dd | wc -l for each month.
  • The e-mail statistics were obtained by looking at the number of messages displayed in the HTML archives, per month, as in http://lists.busybox.net/pipermail/buildroot/2013-August/thread.html.
  • The number of packages was computed using an approximate method, that consists in counting the number of .mk files in Buildroot (a few .mk files are not packages, but the vast majority of them are). The exact command used was git checkout -q $(git rev-list -n 1 --before=2013-08-01 master) && find . -name '*.mk' | wc -l.

by Thomas Petazzoni at October 02, 2013 12:30 PM

Elphel

NC393 development progress – 3

Just a small update – we received all the 3 boards ordered for the NC393 camera at Fastprint, China. We will have our contract manufacturer install the BGA chips, and then I’ll work again on the tiny 0201 components, like 4 years ago. I love to assemble such boards (but not too often) myself – going through all the components when they are real (not virtual) gives me a different perspective to think about the design.

10393 System board, top side

10389 Interface board, top side

10385 Power supply board, top side

10393 System board, bottom side

10389 Interface board, bottom side

10385 Power supply board, bottom side

by andrey at October 02, 2013 05:34 AM

October 01, 2013

FreakLabs

Freakduino 900 MHz Long Range Wireless Boards Back in Stock

I received an unexpected surge in orders for the Freakduino 900 MHz Long Range Wireless boards (http://www.freaklabsstore.com/index.php?main_page=product_info&cPath=22&products_id=211) due to a mention on the Make Magazine site (http://makezine.com/2013/09/30/freakduino-900-mhz-goes-long-distance/) and also in various news outlets (http://www.heise.de/newsticker/meldung/Groessere-Funkreichweite-mit-dem-Freakduino-1970518.html). I didn't plan a large initial run for these boards since I figured that they would mostly appeal to wireless sensor network enthusiasts. The interest was greater than I expected and the boards sold out quickly. I'm now working with quick turn...

October 01, 2013 08:15 PM

Richard Hughes, ColorHug

Copyright in AppData files

More AppData news! I’ve been contacted by someone connected to Debian Legal, who apparently wants me to add copyright information to the AppData files for licence compliance. Whilst most of the files are CC0 (basically public domain), it doesn’t seem super important, but it is technically required. If you ship an AppData file, I’d appreciate it if you could add <!-- Copyright 2013 First Lastname --> on the second line in the file. I’ve updated the AppData specification examples accordingly.
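
For reference, here is a minimal sketch of where that comment is meant to go (the application id is just a placeholder, and the rest of the file is whatever you already ship):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2013 First Lastname -->
<application>
  <id type="desktop">myapp.desktop</id>
  <!-- ...the rest of the AppData file is unchanged... -->
</application>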

Additionally, the appdata-validate tool will warn you about missing copyright information if you use the --strict command line argument in the next release.

by hughsie at October 01, 2013 01:52 PM

September 30, 2013

Bunnie Studios

From Spark: Why Kickstarters are Always Delayed

Zach Supalla, Founder and CEO at Spark, wrote a frank, introspective piece on why Kickstarters are always delayed. His thoughts are particularly germane, as he and his team are currently working hard to deliver on the Spark Core’s Kickstarter campaign promise. They have taken an ultra-transparent approach to updating supporters on their progress, and their challenges — an approach that takes a lot of courage and thick skin.

You can read his thoughts here.

by bunnie at September 30, 2013 07:53 PM

Video Circuits

GEARS / Northlight Video 1973

Some amazing early video art from bobvidpix on youtube, thanks to Peter for the link

"GEARS - Computer Video Art - We made this ditty in 1973, you won't find much earlier combinations of online video mix of 3D computergraphics - especially as this was all done, effects etc. in one live pass. Ed Kammerer got a job at Adage, the first developer of realtime-controllable 3D CAD computers - each filling a midsized room. Like leaving kids in the candy store, they let us take over on weekends and nights to make computer/video art. We ran camera cables to two computer rooms, headset intercoms to each computer and camera operator in both of the rooms and then played with XYZ values and phases on the computers and target/beam on the camera and a mixer/keyer. The music was later custom created by Mark Styles. Credits/Tech - from memory, sorry for mental dropouts - Orville Dodson programmed and operated the AGT 130 Adage Graphic Terminal - Edwin Kammerer assisted and operated the other AGT. Charles Phillips and Andrika Donovan each ran camera and I mixed and tech'd. - We had two black & white Sony AVC 4600 cameras & CCU's and a Shintron SEG 366 switcher - recorded on a Sony EV 320 1" VTR.
The sales director for Adage, George White took this tape to show on a morning network TV show interview during the National Computer Graphic Assoc. tradeshow in NY - the host introduced the roll-in with, " Let's see how a computer works ... "

"The music was composed, performed & recorded by Mark Styles when he was at the Musician's Workshop on Clematis Ave., Waltham. I think it was created on the Arp 2600. After we recorded the multicam video of the computer graphics, Charles brought Mark a video player with this and another piece called 'Lisa' (from lisajous, the math that the computers used for the shapes) and he built the music around the flow of the visuals. I've always felt it was the perfect accompaniment."


"Adage Inc. created the first 3D graphic computer with realtime controls. These huge machines with a dozen racks of binary processors had knobs that controlled motion, dimension and other parameters of the 3D patterns programmed into it. Ed Kammerer worked there and brought us in to spend long sleepless weekends with a camera in each of the computer rooms and live mixing onto B&W reel to reel videotape. Cameras operated by Andrika Donovan and Charles Phillips, with Ed and Orville Dodson at the AGT Terminals, and me at the mix. This clip has the live intercom track, and it must have been made before the term 'crash' was coined - so 'watch out for imminent system collapse!'"

"the other difference, perhaps more significant than 16mm vs. video, is we approached these recordings as jam sessions. There was no set routine, such as a software demo, but instead computer and video operators played and improvised together for the visual mixes we made.

"thanks for your comment. Yes, cameras used for this lisajous and 'Gears' were 1" vidicon tube Sony AVC 4600's, live switched on a Shintron 360 & recorded on a 1" Sony EV 310 (pre-C-format). The computer screens we shot were (approx) 18" vector, oriented vertically, and we would position the cameras over the shoulder of the seated terminal operators, turn the room lights off and go. The cameras had CCU's with target and beam control, which enhanced the ghostly lagging typical of vidicon imagers."


by Chris (noreply@blogger.com) at September 30, 2013 05:15 AM

September 28, 2013

Bunnie Studios

Name that Ware, September 2013

The Ware for September 2013 is shown below.

Just barely made the month of September! It’s been a hectic month, and it looks like I’ll be struggling to keep up for the rest of the year. So many things I’d like to write about, but so little time!

[cat's out of the bag, so here are the other views of the ware, for your viewing pleasure]

by bunnie at September 28, 2013 03:26 PM

Winner, Name that Ware August 2013

The Ware for August 2013 was an APC Mobile Power Pack, model number UPB10. I bought it several years ago, apparently before they had efficient single-chip solutions for battery packs. One of the nice things about this pack is that it provides a “true” 1.6A at 5V output, i.e. it can pump out that current until the battery is depleted, unlike cheaper packs which may be rated for about that much current but can only supply it for short bursts before the internal regulators overheat and shut down. The pack itself served me well for years, until the ultrasonic welds that held the case halves together failed, spilling its guts and giving us last month’s ware in the process.

Lots of people guessed it was a power pack, but Kevin was the first to call the exact model and make. Congrats, email me for your prize.

by bunnie at September 28, 2013 03:26 PM

Free Electrons

Crystalfontz boards support in Yocto

The Yocto 1.5 release is approaching and the Freescale layer trees are now frozen.
Free Electrons added support for the various Crystalfontz boards to that release, as you can see on the OpenEmbedded metadata index.

Yocto Project

First, some preparatory work was done in the meta-fsl-arm layer in order to add the features required to generate an image able to boot on the Crystalfontz boards:

  • Support for a newer version of mainline Barebox, 2013.08.0. The previously supported version of Barebox was too old and didn’t include support for the Crystalfontz boards. Also, some work has been done to make the recipe itself more generic so that custom layers can reuse it more easily (a minimal sketch of such reuse follows this list).
  • Inclusion of the patches allowing the imx-bootlets to boot Barebox. The imx-bootlets were only able to boot U-Boot or the Linux kernel until now.
  • Creation of a new image type, using the imx-bootlets and then Barebox to boot the Linux kernel. All the boards based on a Freescale mxs SoC (i.mx23 and i.mx28) will benefit from this new image type. This is actually the difficult part, where you lay out the compiled binaries (bootloaders, kernel and root filesystem) in the final file that is an SD card image ready to be flashed.
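To give an idea of the reuse mentioned above, a custom layer could extend the now-generic Barebox recipe with a bbappend roughly like the following sketch (the patch and defconfig names are made up for the example):

    # barebox_2013.08.0.bbappend in a custom layer
    FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

    # ship board-specific patches and configuration from this layer
    SRC_URI += "file://0001-add-myboard-support.patch \
                file://defconfig"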

Then, the recipes for the Crystalfontz boards have been added to the meta-fsl-arm-extra layer:

  • First the bootloaders, imx-bootlets and Barebox, including the specific patches and configurations for the Crystalfontz boards.
  • Then the kernel. The linux-cfa recipe uses the 3.10-based kernel available on GitHub.
  • The machine configurations themselves, selecting Barebox as the bootloader and the correct kernel recipe. They also choose to install the kernel in the root filesystem instead of in its own partition (a rough sketch of such a machine file follows this list).
  • Touchscreen calibration for the cfa-10057 and the cfa-10058 boards. This is required to get xinput-calibrator working properly as it can’t calibrate without starting values.
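As a rough idea of what such a machine configuration contains, here is a minimal, hypothetical sketch; the include path and variable names are assumptions, so the actual machine files (e.g. for the cfa-10057) in meta-fsl-arm-extra are authoritative:

    # simplified machine file for a Crystalfontz board (sketch only,
    # variable names are assumptions)
    include conf/machine/include/mxs-base.inc

    # pick the Crystalfontz 3.10 kernel and boot through Barebox
    PREFERRED_PROVIDER_virtual/kernel = "linux-cfa"
    IMAGE_BOOTLOADER = "barebox"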

In a nutshell, you can now use the following commands to get a working image for your particular Crystalfontz board:

  • For your convenience, Freescale provides a repo manifest to retrieve all the necessary git repositories. First, download and install repo:
    mkdir ~/bin
    curl https://dl-ssl.google.com/dl/googlesource/git-repo/repo > ~/bin/repo
    chmod a+x ~/bin/repo
    PATH=${PATH}:~/bin
  • We will work in a directory named fsl-community-bsp: