wolfspra1l | thanks for the heads-up about the server | 06:23 |
---|---|---|
wolfspra1l | it's limping along :-) | 06:23 |
qi-bot | [commit] Werner Almesberger: lpc111x-isp/lpc111x.c: straighten *dialog*() API; radically simplify tracing (master) http://qi-hw.com/p/ben-blinkenlights/eda1135 | 07:01 |
qi-bot | [commit] Werner Almesberger: lpc111x-isp/lpc111x.c (identify): retrieve and print the chip's unique ID (master) http://qi-hw.com/p/ben-blinkenlights/505caf9 | 07:01 |
qi-bot | [commit] Werner Almesberger: libubb/swuart.c (swuart_open): don't call ubb_power (master) http://qi-hw.com/p/ben-blinkenlights/be82db0 | 07:01 |
qi-bot | [commit] Werner Almesberger: lpc111x-isp/lpc111x.c: new option -n to disable powering the device (master) http://qi-hw.com/p/ben-blinkenlights/b2f1310 | 07:01 |
qi-bot | [commit] Werner Almesberger: lpc111x-isp/lpc111x.c: read and dump (to stdout) the entire Flash (master) http://qi-hw.com/p/ben-blinkenlights/5246f5f | 07:01 |
wpwrak | whee, works again ! :) thanks ! | 07:01 |
xiangfu | yes. (en.qi-hardware.com) works fine here. | 07:09 |
wpwrak | wolfspra1l: btw, when you announce fpgatools more widely, people will want to know how to join the fun. do you plan to make a demo board ? or a reference design people can implement ? or select (a) 3rd party board(s) you recommend ? | 07:12 |
kyak | wpwrak: do you use PWM to control LED brightness with the 8:10 card, or is it a simple on/off? | 07:52 |
wpwrak | hmm, you mean "is there a PWM one could use with UBB" ? | 07:56 |
wpwrak | and the answer to that would be "no". all you have there is the MMC controller | 07:57 |
wpwrak | you can use it to send finite bit patterns (see ubb-vga) but it's not really a PWM | 07:57 |
wpwrak | depending on the accuracy you need, you can hack various alternatives | 07:58 |
wpwrak | e.g., you could do a software PWM in unprivileged user space if you don't mind the occasional scheduling interruption | 07:58 |
kyak | yep, we already had this discussion and just re-read the log :) | 07:59 |
wpwrak | or privileged user space (with real-time priority) if you don't mind the occasional interrupt | 07:59 |
wpwrak | or heavily nasty privileged user space (disabling interrupts) if you don't mind cache delays | 08:00 |
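The unprivileged user-space PWM mentioned above could be sketched roughly like this in C. `gpio_write()` is a hypothetical stub standing in for real UBB pin access (on the Ben that would mean poking the GPIO data registers); the timings are illustrative only:

```c
#include <stdint.h>
#include <unistd.h>

/* hypothetical helper standing in for real UBB pin access;
   here it just records the last level written */
static int pin_state;
static void gpio_write(int level) { pin_state = level; }

/* split one PWM period into on/off portions (microseconds) */
void pwm_times(uint32_t period_us, unsigned duty_pct,
               uint32_t *on_us, uint32_t *off_us)
{
    *on_us = period_us * duty_pct / 100;
    *off_us = period_us - *on_us;
}

/* run n periods of software PWM; scheduler jitter adds to the
   sleeps, which is the "occasional scheduling interruption" */
void soft_pwm(uint32_t period_us, unsigned duty_pct, unsigned n)
{
    uint32_t on_us, off_us;

    pwm_times(period_us, duty_pct, &on_us, &off_us);
    while (n--) {
        gpio_write(1);
        usleep(on_us);
        gpio_write(0);
        usleep(off_us);
    }
}
```

The privileged variants discussed above would keep the same loop but raise scheduling priority or mask interrupts around it.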
kyak | how about various real-time patches to linux kernel? would they help? | 08:00 |
wpwrak | or use the MMC controller to get rid of cache delays as well, but have an anomaly at block boundaries | 08:00 |
wpwrak | not sure. i didn't keep track of them | 08:02 |
wpwrak | ah, more options: use timers vs. busy looping | 08:02 |
kyak | so, it turns out to be a quite limited GPIO.. | 08:02 |
wpwrak | yes. if you need resource-friendly tight timing, you need to add some MCU | 08:03 |
wpwrak | like this critter: http://en.qi-hardware.com/wiki/File:Uart-inserted.jpg | 08:04 |
kyak | maybe one real use case i can think of is using it for keyboard backlight. Not a "back" light, but more like a lamp at night :) | 08:05 |
wpwrak | yeah, why not :) | 08:05 |
kyak | so, you use gpios to command the mcu, and it in turn does the real-time thing? | 08:06 |
kyak | well, basically, it's ben-wpan :) | 08:06 |
wpwrak | yeah | 08:07 |
wpwrak | but if your external stuff isn't timing-critical or if it is but you can cheat, then you don't need any of this | 08:08 |
wpwrak | ubb-vga is one example of cheating | 08:08 |
wpwrak | swuart is another one | 08:08 |
kyak | taking into account that we can change the firmware of ben-wpan, the board is already ready.. maybe we can even control motor with atben+some h-bridge | 08:08 |
wpwrak | you could probably even make a low- or even full-speed device or a USB host | 08:09 |
wpwrak | atben has no firmware. only atusb does, but that's not for the ben | 08:09 |
kyak | yep, so all applications where i don't need determinism, i can cheat | 08:09 |
wpwrak | all applications where you need precise timing only for a reasonably short interval and where you need rapid responses only within a reasonably short time interval | 08:11 |
kyak | how come atben has no firmware? i see there is some mcu :) | 08:11 |
wpwrak | that's the transceiver :) | 08:11 |
kyak | and you control the transceiver with gpios, right? | 08:11 |
wpwrak | yup. it talks SPI plus a few control signals with the ben | 08:11 |
kyak | ok... i see | 08:12 |
kyak | it means that we also have SPI :) | 08:12 |
wpwrak | see also: http://projects.qi-hardware.com/schhist/atben/pdf_atben.pdf | 08:12 |
wpwrak | SPI is very very easy :) | 08:12 |
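Bit-banged mode-0 SPI really is only a few lines. A sketch with stub pin functions (hypothetical; the real thing would drive UBB pins), MISO looped back to MOSI so the code can be exercised without hardware:

```c
#include <stdint.h>

/* stub pin functions standing in for real UBB GPIO access;
   MISO is looped back to MOSI so this runs without hardware */
static int mosi_level;

static void set_mosi(int v) { mosi_level = v; }
static void set_sck(int v)  { (void)v; /* clock edge, no-op in the stub */ }
static int  get_miso(void)  { return mosi_level; }

/* shift one byte out MSB first, sampling MISO around the
   rising clock edge (SPI mode 0) */
uint8_t spi_xfer(uint8_t out)
{
    uint8_t in = 0;
    int i;

    for (i = 7; i >= 0; i--) {
        set_mosi((out >> i) & 1);
        set_sck(1);                    /* both sides sample here */
        in = (uint8_t)((in << 1) | get_miso());
        set_sck(0);
    }
    return in;
}
```

Since SPI is clocked by the master, there is no maximum-response-time problem: the clock simply waits as long as the software needs.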
kyak | if i have a temperature sensor that talks i2c, connecting it to 8:10 card would be a piece of cake, right? | 08:14 |
wpwrak | if you're looking for protocols, we also have the in-circuit programming protocols of: AVR (via avrdude), some PICs (wernermisc/bacon/prog/), silabs C8051F3xx series (f32xbase/f32x/), and soon NXP LPC1xxx (ben-blinkenlights/lpc111x-isp/) | 08:15 |
wpwrak | yup, i2c has no problematic timing constraints | 08:16 |
kyak | heh, it seems that you've exploited this MMC controller to the full :) | 08:16 |
wpwrak | the only things that suck are maximum response times. minimum response times are never a problem | 08:16 |
wpwrak | oh, all those just bit-bang | 08:16 |
wpwrak | the only thing where i used the MMC controller is ubb-vga | 08:16 |
wpwrak | i turn interrupts off in ubb-vga and swuart | 08:17 |
kyak | but when you use those 6 GPIOs, don't you use MMC controller? | 08:17 |
wpwrak | no, i just use them as gpios | 08:17 |
kyak | hm.. i thought these gpios are coming from mmc controller | 08:18 |
wpwrak | in f32x, i leave interrupts on but get real-time priority. that means that the programmer may occasionally miss the timing (that protocol has a maximum response time) but that doesn't happen very often | 08:19 |
wpwrak | each pin can be switched between several functions, e.g., gpio, interrupts, or some hardwired function block (mmc, spi, ...) | 08:20 |
wpwrak | well, almost each i/o pin | 08:20 |
wolfspra1l | regarding fpgatools, people need to come forward if they want something | 08:21 |
kyak | how do you decide which way to go - get rt priority and leave interrupts on, or to turn off interrupts? | 08:21 |
wolfspra1l | I will react then | 08:21 |
wolfspra1l | at this point the sw is in such alpha state that you couldn't run much on actual hardware | 08:22 |
wolfspra1l | I took a break the last week or so for year-end backups and cleanups and so on but will continue soon. already forgot what the next item was :-) I think make the blinking_led a little more flexible, then jtag-controllable counter and other goodies | 08:23 |
wpwrak | yeah, that was the next item on the list | 08:23 |
wpwrak | kyak: basically based on how precise things have to be and how bad it is if i miss the timing | 08:23 |
kyak | wpwrak: i see.. btw, can we disable interrupts that simply on x86? :) | 08:24 |
wpwrak | wolfspra1l: people will probably want to "follow the project" and have hardware that lets them run whatever new cool stuff happens. so if your experimentation platform is somewhat predictable, that would be the device of choice. | 08:25 |
wpwrak | kyak: just tell the interrupt controller to shut up ? ;) | 08:26 |
wpwrak | kyak: but i've never done that on x86. on x86, if i write code that may crash/hang the system, i tend to go straight for the kernel | 08:27 |
kyak | so that you could crash it much better :) | 08:28 |
wpwrak | well, in the kernel i have functions that manage resources for me, etc. | 08:29 |
wpwrak | on the ben, i can get away with just saying "timer 3 is MINE now". but i wouldn't dare that on x86 | 08:29 |
kyak | but why not? what's so different? | 08:30 |
kyak | you don't know which peripheral might be driven by that timer? | 08:30 |
wpwrak | yup. on x86 i have no idea what may grab timers and such | 08:31 |
wpwrak | on the ben, life is simpler and more stable | 08:31 |
wpwrak | and there are also hardware abstractions on x86 the kernel takes care of for me. on the ben, there's just one hardware | 08:32 |
wpwrak | so no guessing how interrupts may be routed, etc. | 08:33 |
kyak | x86 is so complicated, it always makes me wonder how real-time the real-time solutions based on x86 really are | 08:33 |
kyak | there is a huge market for x86 simulators, for example | 08:34 |
wpwrak | x86 simulators ? | 08:34 |
kyak | and they are real-time, yes | 08:34 |
kyak | oh, i mean, simulators based on x86 | 08:35 |
kyak | if you ever heard of Opal-Rt, dSpace, xPC Target - these are all x86 | 08:36 |
kyak | some are running "red-hat linux with real-time patches", some are running win32-compatible RTOS | 08:37 |
wpwrak | well, for reasonably lax RT, a lot of things are possible | 08:37 |
wpwrak | for example, ubb-vga has to be accurate in the nanosecond range (each pixel is only about 18 ns). software would have a hard time doing that :) | 08:40 |
wpwrak | swuart is nicer. at 115200 bps, i have 8.7 us per bit. software can do that, though with interrupts off, and i use a timer to avoid drifting (due to cache delays) | 08:41 |
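The 8.7 us figure is just the reciprocal of the baud rate; a small helper (illustrative only) makes the budget explicit:

```c
#include <stdint.h>

/* nanoseconds available per bit at a given baud rate */
uint32_t bit_time_ns(uint32_t baud)
{
    return 1000000000u / baud;
}
```

At 115200 bps that gives 8680 ns, i.e. the 8.7 us per bit quoted above; at VGA pixel rates (tens of MHz) the same arithmetic leaves only tens of nanoseconds per pixel, which is why ubb-vga leans on the MMC controller instead of pure software.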
kyak | yep, everything below 1 us is usually offloaded to FPGA (http://www.opal-rt.com/electrical-system-overview) | 08:41 |
kyak | but all of these providers have no problems with capturing a high-frequency PWM or doing quadrature decoding in real-time | 08:42 |
wpwrak | a little fpga can go a long way when it comes to relaxing your timing | 08:45 |
kyak | anyway, thanks for the chat.. time to do some new year preparations :) | 08:46 |
wpwrak | ah, the booze shopping :) | 08:47 |
kyak | in fact, did that one week ago. It would've been suicide to go to a store today and tomorrow :) | 09:09 |
Fallenou | morning | 09:10 |
DocScrutinizer05 | RT usually is defined as "predictable guaranteed maximum response delay to a certain set of defined external events" - this response time can as well be defined as: in the range of minutes | 19:41 |
DocScrutinizer05 | depending on the particular system | 19:41 |
DocScrutinizer05 | generally RT isn't about speed but about predictability and determinism | 19:42 |
pcercuei | yep | 19:43 |
whitequark | but then you discover that your maximum delay is not small enough... | 19:46 |
DocScrutinizer05 | following this definition, Linux-RT is not about a particularly speedy system, but about guaranteeing that a set of processes scheduled realtime will never take longer than X to get CPU and process an IRQ | 19:46 |
DocScrutinizer05 | otoh extreme speed requirements (like werner's VGA hack) might not have real hard RT requirements, if you tolerate occasional artifacts in your display | 19:47 |
whitequark | DocScrutinizer05: I'd say that technically it does have hard RT requirement, as with some delays you'd lose sync | 19:48 |
DocScrutinizer05 | whitequark: no, if you'd for example miss out on 1% of pixels and display just a default black instead, but resync for next pixel, you'd get a pretty acceptable image yet the system clearly doesn't qualify for RT | 19:51 |
DocScrutinizer05 | basically the timing requirements for a VGA output are mostly unrelated to those specified for a RT system | 19:52 |
DocScrutinizer05 | e.g jitter isn't even a topic for RT | 19:53 |
whitequark | DocScrutinizer05: I was more thinking about this: if your system, let's say, features a garbage collector with unbounded maximum runtime, it is obviously not RT. And while for 99% of cases it would output the perfect signal, the fact that it *can* lose sync (which I deem a catastrophic failure here) makes it unsuitable for the task | 19:56 |
DocScrutinizer05 | yes, such a thing like a GC disqualifies the system for both RT and the wpwrak VGA hack | 19:58 |
wpwrak | very much so ;-) | 19:58 |
DocScrutinizer05 | still doesn't mean RT and VGA hack have any common denominator | 19:58 |
wpwrak | you have these parameters in all systems: what bounds you want to be met and the consequences of failure to meet them. | 19:59 |
kyak | wpwrak: would it be a catastrophic failure for your system if it misses a single deadline? | 19:59 |
whitequark | DocScrutinizer05: but don't requirements for the VGA hack (the need to output sync signals in time) qualify it to be an RT system? | 19:59 |
whitequark | DocScrutinizer05: I don't understand why not | 19:59 |
DocScrutinizer05 | what i'm saying is that for VGA hack a lot of RT specs apply as well, though in a way more strict way. While some others don't apply at all | 19:59 |
wpwrak | in the case of ubb-vga, failure isn't catastrophic. but of course, if it happens too often, people will dislike it. | 19:59 |
kyak | i don't like this word, but then your system is "soft real-time" | 20:00 |
wpwrak | i think many screens have something like a PLL. so if you get the timing wrong too often, the PLL will start to wander. | 20:00 |
wpwrak | DocScrutinizer05: yes, it's quite atypical RT. | 20:01 |
wpwrak | at least in the context of software RT | 20:01 |
wpwrak | not so much in the context of hardware RT. e.g., you don't really expect a UART to jitter significantly | 20:01 |
DocScrutinizer05 | generally you expect hw to be "real time" | 20:02 |
whitequark | DocScrutinizer05: or, in other words, what I mean is that requirements for the vga hack is a superset of typical requirements for an RT system | 20:02 |
DocScrutinizer05 | exceptions frequently need very special notice | 20:02 |
DocScrutinizer05 | whitequark: nope, not all of them, since RT *never* allows any glitch | 20:03 |
wpwrak | every system glitches :) | 20:03 |
whitequark | DocScrutinizer05: I see | 20:04 |
DocScrutinizer05 | while, as explained above, you could output a complete black H-line on ubb-vga every now and then and it wouldn't matter too much | 20:04 |
wpwrak | the question is what happens then. 1) nobody even notices. 2) a few raised eyebrows. 3) a murder investigation. 4) etc. | 20:05 |
DocScrutinizer05 | the requirements for ubb-vga are simply different from the definition of RT, though admittedly quite a few of them can be found in RT defs | 20:05 |
whitequark | DocScrutinizer05: thanks for explanation | 20:06 |
DocScrutinizer05 | on a related topic: do you know why TV RF is specified as "stronger signal for darker pixel"? | 20:06 |
DocScrutinizer05 | random noise would usually add to the signal level from transmitter, thus causing dark spots instead of white spots | 20:07 |
DocScrutinizer05 | your eye will ignore those transient dark spots pretty much | 20:07 |
whitequark | DocScrutinizer05: but why does it add to the signal level? | 20:08 |
wpwrak | the eye's ability to remember photons is indeed quite remarkable | 20:08 |
DocScrutinizer05 | whitequark: because the two signals rather add than interfere to mutually cancel | 20:11 |
DocScrutinizer05 | since for audio the same physiological trick obviously doesn't work, they chose AM for the video and FM for the audio signal of TV | 20:12 |
whitequark | by the way, what do you guys think about automatic reference counting as a memory management strategy? | 20:35 |
DocScrutinizer05 | common strategy, used e.g. in Qt | 20:39 |
DocScrutinizer05 | combined with copy-on-modify for optimization of string handling | 20:40 |
whitequark | DocScrutinizer05: I think I should elaborate. I'm implementing a Ruby dialect for embedded development, right now. (I wrote an article: http://whitequark.org/blog/2012/12/06/a-language-for-embedded-developers/) | 20:41 |
whitequark | my strategy for memory management is currently as follows: | 20:41 |
whitequark | 1. perform region analysis and put all objects whose lifetime does not exceed that of the current stack frame on the stack | 20:42 |
whitequark | 2. have a heap divided into fixed-size (16- or 32-byte) blocks to avoid fragmentation and have fast, constant-time allocation | 20:42 |
whitequark | 3. use automatic reference counting with write barriers inserted by the compiler for fast, constant-time deallocation | 20:42 |
whitequark | 4. use copy-on-write and ropes for strings and arrays | 20:43 |
DocScrutinizer05 | sounds ok'ish to me, but I wouldn't know too much to contribute anyway | 20:45 |
whitequark | oh ok | 20:45 |
whitequark | now, a bit about the other side of the coin | 20:45 |
DocScrutinizer05 | probably wpwrak has some better expertise and comments on such stuff | 20:46 |
whitequark | in a nutshell, I use Ruby for three things here: | 20:47 |
whitequark | 1. it compiles down to native code which executes on the target device | 20:47 |
whitequark | 2. it executes on host to generate other Ruby code. Think of it as a C preprocessor done right, or a simple and safe form of Lisp macro expansion | 20:48 |
whitequark | 3. it executes on host with target semantics, which is basically the same as constant folding performed by modern C compilers, but well-defined and more extensive. | 20:48 |
whitequark | (I think that C++11 with its well-defined constant folding semantics is close to what I want, but not entirely sure) | 20:49 |
whitequark | the expected benefit is to decouple intended semantics of your code from accidental semantics of, for example, C, and its way to handle the target and its quirks. | 20:50 |
kyak | whitequark: just wondering, how do you compile Ruby down to native code? And what do you exactly mean by "native code"? | 20:51 |
whitequark | would you, as embedded developers, want to use such language? if not, why? | 20:51 |
whitequark | kyak: compiling ruby to native code is easy. Obj-C is basically the exact same stuff | 20:51 |
whitequark | compiling ruby to _efficient_ machine code is much harder | 20:52 |
whitequark | I've added static typing, and type inference so you can avoid writing unnecessary code | 20:52 |
whitequark | kyak: "native code" here means that there is no interpreter and, additionally, you are not isolated from details of your hardware unless you opt to. | 20:53 |
kyak | ok, so you go directly from Ruby to machine code? How much would you have to change if you want to support another target? | 20:53 |
whitequark | kyak: well, not directly. I have my own SSA IR which resembles Ruby semantics (the IR itself is modelled after LLVM IR), and which I optimize, and then I convert it to the LLVM IR | 20:54 |
whitequark | (another target) it depends. CPU architectures are handled by LLVM, so adding one means I just need to find a decent C++ dev | 20:55 |
whitequark | board support packages, on the other hand, are written entirely in Ruby | 20:55 |
kyak | i see.. it's very interesting | 20:56 |
whitequark | obviously you need to adapt them across different families, but given how flexible Ruby is, it would be way, way simpler than in C | 20:56 |
whitequark | and you could also assign this task to Ruby programmers :) | 20:56 |
kyak | do i understand correctly, i have Ruby code, then i get the IR which is used by LLVM to convert to a specific machine code? | 20:57 |
whitequark | kyak: Ruby code -> (parsing, translating) -> Ruby IR -> (optimizing) -> Ruby IR -> (translating) -> LLVM IR -> (llvm) -> machine code | 20:58 |
kyak | i see.. How do you verify your machine code against Ruby code? I mean, this chain is error-prone | 20:59 |
whitequark | kyak: it is not inherently more error-prone than ones in GCC or LLVM Clang themselves | 21:00 |
whitequark | so the answer is, test coverage. | 21:00 |
whitequark | I don't do formal verification. It's not even possible for almost all real-world code. | 21:00 |
kyak | yeah, compiler is also adding possible errors | 21:00 |
whitequark | kyak: don't forget humans, who quite certainly add a lot of errors ;) | 21:02 |
whitequark | you probably won't use v1.0 to control your car. but that applies to any different compiler as well. | 21:03 |
kyak | what you are doing is very interesting. In fact, such approach is widely used in some tools. For example, MATLAB (being a high-level language of technical computing) and Simulink (being a tool for system-level modeling via block diagrams) can both be converted to C code (that's a bit different from your approach where you don't actually get the readable code) | 21:03 |
whitequark | I'm fine starting with TV remotes. | 21:03 |
whitequark | I could use LLVM C backend to generate C code | 21:04 |
whitequark | it would even make quite some sense, for this kind of auto-generated stuff | 21:04 |
whitequark | e.g. you could clearly see objects, their methods, lambda functions, etc | 21:04 |
kyak | it is also a general trend in last years to use higher level languages for embedded systems development (namely, the automatic C code generation) - because the systems are getting so complex | 21:05 |
wpwrak | the idea of GC in embedded code makes me feel somewhat uncomfortable | 21:05 |
viric | :) | 21:06 |
DocScrutinizer05 | I'm still using assembler ;-P | 21:06 |
viric | I don't think is that bad. | 21:07 |
viric | it's just a matter to write it well enough. | 21:07 |
DocScrutinizer05 | generally speaking I'd try to avoid resource allocation and freeing in embedded, if at all possible | 21:07 |
hozer | how do you deal with real-time requirements and GC | 21:07 |
viric | wpwrak: see how many smartcards run java :) | 21:07 |
hozer | do those smartcards have a gc? | 21:07 |
viric | I don't see why not | 21:08 |
whitequark | wpwrak, DocScrutinizer05: I fully agree. It is well possible to write a program without GC with this approach, if you only use global data and stack-allocated temporaries (as it is often the case) | 21:08 |
kyak | i think it's a great exercise anyway, and definitely we will have to go to a higher level than C | 21:08 |
whitequark | hozer: they do have a GC. There are realtime GCs out there | 21:08 |
hozer | python is the way to go ;) | 21:08 |
wpwrak | hozer: you have two choices: 1) you leave room for worst-case GC. 2) you don't do GC ;-) | 21:08 |
viric | yes. | 21:08 |
viric | so easy. | 21:09 |
hozer | I'll take option-2 for my engine controller please | 21:09 |
wpwrak | yeah | 21:09 |
viric | You can have real-time embedded systems with memory leaks, instead ;) | 21:09 |
wpwrak | whitequark: maybe just make it a fatal error to do anything that would require GC | 21:10 |
hozer | don't allocate memory | 21:10 |
wpwrak | in smaller embedded systems, you don't have resources to throw around anyway | 21:10 |
whitequark | wpwrak: yes, there would be a possibility to disable the heap at compile time. I don't see why not. | 21:10 |
viric | hozer: if you don't allocate memory, you won't be running a gc | 21:10 |
whitequark | ARC doesn't have problems which GC's often have. | 21:10 |
hozer | if there's no memory allocation, there can be no memory leaks :P | 21:10 |
whitequark | it has predictable allocation and deallocation times, which are also fixed if the heap doesn't fragment. | 21:10 |
kyak | forget about the GC, think about higher level languages. | 21:11 |
whitequark | so I think it does suit a lot of embedded systems well. | 21:11 |
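The predictable-deallocation claim is easy to see in a toy reference counter. This is a generic sketch, not whitequark's implementation; `live_objects` is instrumentation added purely for the example:

```c
#include <stdlib.h>

static int live_objects;   /* instrumentation for this sketch only */

struct obj {
    int refcount;
    int value;
};

struct obj *obj_new(int value)
{
    struct obj *o = malloc(sizeof *o);

    o->refcount = 1;
    o->value = value;
    live_objects++;
    return o;
}

void obj_retain(struct obj *o)
{
    o->refcount++;
}

/* deterministic, constant-time deallocation: whoever drops the
   last reference frees the object, no collector pauses involved */
void obj_release(struct obj *o)
{
    if (--o->refcount == 0) {
        live_objects--;
        free(o);
    }
}
```

The cost of freeing is paid at a known point (the last release) rather than at some collector-chosen time, which is the property that matters for real-time code; cycles, as discussed above, are the one case this scheme cannot reclaim.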
hozer | what is arc | 21:11 |
whitequark | hozer: automatic reference counting | 21:11 |
whitequark | kyak says a very important thing. there shouldn't be a reason why your register couldn't have a high-level representation | 21:11 |
whitequark | which is both a pleasure to work with (PLL.lock_at(16_000_000)) and compiles to machine code which is as efficient as if you'd do that in C. | 21:12 |
hozer | I like ARC, and a circular reference (that breaks ARC) should be a fatal exception and the thing turns off ;) | 21:12 |
hozer | because you are going to get memory corruption at some point, and the system should gracefully power off when that happens | 21:13 |
whitequark | hozer: a generally accepted solution is to use weak references and/or include a mark&sweep GC in addition to ARC | 21:14 |
whitequark | but weak refs are quite heavy | 21:14 |
whitequark | so you either use mark&sweep GC if you don't have to care about realtime, or you look after yourself and break loops manually. | 21:14 |
wpwrak | considering that you'll typically be in a memory-constrained context, you may want to have explicit allocation limits | 21:16 |
wpwrak | e.g., given objects of type A, B, and C, something like: A | 2*B | A+C | 21:17 |
wpwrak | so you either allocate an A and maybe a C too, or neither A nor C, but two B | 21:17 |
whitequark | wpwrak: at which point would I enforce this limit? | 21:17 |
wpwrak | you may optionally check for them | 21:18 |
viric | it's about static analysis | 21:18 |
viric | no? | 21:18 |
wpwrak | it's about compile-time allocation | 21:18 |
wpwrak | it would basically be the programmer telling the compiler what resource use is expected | 21:18 |
whitequark | wpwrak: well, CFA and DFA allow me to infer this information, sometimes | 21:19 |
wpwrak | it's up to the programmer to ensure this isn't violated, be it by checking in the code (and implementing a recovery strategy in case of a conflict), or by ensuring that, implicitly, this can't happen | 21:19 |
wpwrak | whitequark: of course, you may find that you're rapidly approaching C semantics with all this :) | 21:20 |
whitequark | wpwrak: ah, I see what you mean. Interesting approach. I have some aversion to techniques which require the programmer to ensure something isn't violated, but this is probably a result of writing too much Ruby | 21:20 |
wpwrak | you could add checks, but they would basically be of the type if (check_is_okay) do_it(); else panic(); | 21:21 |
whitequark | wpwrak: C semantics isn't all that bad. The parts which closely resemble and allow you to work directly with hardware resources are very useful | 21:21 |
whitequark | Inability to build any abstractions around those parts is what's bad | 21:22 |
whitequark | the compiler is also too stupid sometimes, where it has no reason to. | 21:22 |
whitequark | for example, I do not understand why, in absence of mutually recursive functions (which are WRONG in embedded anyway), a compiler couldn't determine optimal stack depth at compile time by itself. | 21:23 |
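With an acyclic call graph, that computation is just a depth-first maximum over frame sizes. A toy version (function names, frame sizes, and the call graph are all made up for illustration):

```c
#define NFUNCS 4

/* hypothetical frame sizes (bytes) and call graph for a
   toy program: main -> {a, b}, a -> c, b -> c */
static const int frame[NFUNCS] = { 32, 16, 48, 8 };
static const int calls[NFUNCS][NFUNCS] = {
    { 0, 1, 1, 0 },   /* main calls a and b */
    { 0, 0, 0, 1 },   /* a calls c */
    { 0, 0, 0, 1 },   /* b calls c */
    { 0, 0, 0, 0 },   /* c is a leaf */
};

/* worst-case stack use from function f downward; sound only if
   the call graph is acyclic, i.e. exactly the no-recursion case */
int max_stack(int f)
{
    int worst = 0, g;

    for (g = 0; g < NFUNCS; g++)
        if (calls[f][g]) {
            int d = max_stack(g);

            if (d > worst)
                worst = d;
        }
    return frame[f] + worst;
}
```

Function pointers and recursion are what break this analysis in practice, which is why embedded coding standards often forbid both.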
wpwrak | in C (in embedded systems), you'd normally just have static allocations for such things. but of course, that could waste memory. | 21:23 |
whitequark | wpwrak: (such things) which? | 21:23 |
wpwrak | e.g., if you have two subsystems which each need some buffers, but they're not active at the same time | 21:24 |
viric | once the memory is physically there, and only for you, it is no waste. | 21:24 |
whitequark | wpwrak: ah, I understand what you mean. I'll think of possible solutions for this problem. | 21:24 |
whitequark | note that I do not enforce memory safety. Precisely nothing prevents you from allocating a region of bytes and then doing whatever you want with it. | 21:25 |
whitequark | I only provide any guarantees if you use provided abstractions in well-defined way. Which is basically what C does as well. (Except that my *default* abstraction for strings prevents you from getting buffer overruns all over the place. You get the idea.) | 21:26 |
DocScrutinizer05 | (resources / allocation) I tend to define "static" variables, and for any re-use I simply use unions on same memory range, used in mutually exclusive program branches | 21:27 |
whitequark | wpwrak: (two subsystems) in fact this is probably best solved by stack allocation, yeah | 21:27 |
whitequark | DocScrutinizer05 uses what I've described before that | 21:27 |
DocScrutinizer05 | so no need to GC anything | 21:27 |
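DocScrutinizer05's union trick looks like this in C; the two subsystem structs are hypothetical, the point being that both occupy the same static range and must never be live at the same time:

```c
#include <stddef.h>

/* two hypothetical subsystems that are never active at the same
   time overlay their state on one static memory range */
union scratch {
    struct { unsigned char rx_buf[128]; } uart_job;
    struct { unsigned char sector[64]; int crc; } flash_job;
};

static union scratch scratch;

unsigned char *uart_buffer(void)  { return scratch.uart_job.rx_buf; }
unsigned char *flash_buffer(void) { return scratch.flash_job.sector; }
```

Both accessors return the same address and the union occupies only as much memory as its largest member, which is exactly the saving over two separate static buffers; the mutual-exclusion discipline is entirely on the programmer, as discussed above.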
viric | C is not that bad if you take malloc and free out :) | 21:28 |
whitequark | a thing I'll also be able to do (and of which I'm quite proud that it is possible) is that you could execute code with target semantics on your host | 21:29 |
whitequark | which means: | 21:29 |
whitequark | 1) unit tests | 21:29 |
wpwrak | whitequark: some sort of stack. not necessarily the regular stack. | 21:29 |
GitHub76 | 01[13j1soc01] 15kristianpaul pushed 1 new commit to 06master: 02https://github.com/kristianpaul/j1soc/commit/95d55a016e8966e42bbd37954c5de3e6e5809b0f | 21:29 |
GitHub76 | 13j1soc/06master 1495d55a0 15Cristian Paul Peñaranda Rojas: from RAMB16_S to RAMB16BWER, soc nows builds please check log | 21:29 |
whitequark | even better, unit tests where your mock peripherals can be written in regular Ruby, simplifying that a lot | 21:30 |
wpwrak | whitequark: e.g., you may have modes of operation but some common code as well. so you'll return to your main event handler or whatever, but you'd then switch modes. | 21:30 |
DocScrutinizer05 | it would be nice if any assembler/compiler would throw an error when a function of branch B is called in branch A where conflicting uses of a memory range would create colliding visibility of different cases of same range | 21:30 |
hozer | can you make this work so I can write peripherals in python too ;) | 21:30 |
DocScrutinizer05 | yay, I wonder if anybody could parse the above | 21:30 |
wpwrak | if an event appears that doesn't match the current mode, you'd ignore it, abandon the previous mode, etc. | 21:30 |
whitequark | wpwrak, DocScrutinizer05: well, with the stack allocation, the compiler would enforce that implicitly | 21:31 |
whitequark | with a more complex system like modes, there probably isn't a way to verify this in the compiler | 21:31 |
hozer | But what if this memory range is hardware registers (like say Infiniband verbs stuff) | 21:31 |
whitequark | (I suspect it can be proven that general case is equivalent to halting problem) | 21:31 |
wpwrak | whitequark: (target semantics on the host) just a question of writing the appropriate wrapper :) | 21:31 |
DocScrutinizer05 | whitequark: yup, for stack stuff is pretty simple | 21:32 |
whitequark | wpwrak: what if your target has 16-bit ints? things get pretty painful, and emulators often aren't what you want | 21:32 |
DocScrutinizer05 | in assembler however you tend to think of stack as a location to push registers and PC | 21:32 |
whitequark | hozer: (python) sorry, only Ruby. they're very similar, you won't have problems learning one if you know another one. | 21:32 |
wpwrak | whitequark: easy: don't use "int". use "int16_t" instead. | 21:32 |
whitequark | wpwrak: aaand what if your target has non-IEEE floating point semantics? :D | 21:33 |
whitequark | like ARM NEON | 21:33 |
hozer | whitequark: I've got python code that's been running for several years, I'd like to be able to just use it instead of rewriting it | 21:34 |
DocScrutinizer05 | \o/ NEON | 21:34 |
wpwrak | god created the integer. all else is heresy :) | 21:34 |
whitequark | hozer: I suspect that in this case, some form of IPC would suffice. I'd think about implementing that someday. It depends on the exact application, though. | 21:34 |
DocScrutinizer05 | in another channel some guys have been investigating NEON vs genuine ARM for a few days. Results are not that encouraging | 21:35 |
hozer | couldn't you use the same python to llvm approach? Can swig make python<->ruby interfaces? | 21:35 |
whitequark | hozer: hm, there is an existing ruby<>python bridge in fact. yes, you could use that. | 21:36 |
whitequark | this is how github highlights syntax on the website. yeah, one EXTRA FAT interpreter uses another EXTRA FAT interpreter :D | 21:36 |
hozer | git highlights syntax using python running in ruby? | 21:37 |
whitequark | github. yes. | 21:37 |
whitequark | it seems that pygments is much better than any existing Ruby alternative. | 21:37 |
hozer | hilarious. So the question is how many bytes of object code does this fatness compile to after you run your magic ;) | 21:37 |
whitequark | hozer: I don't compile whatever runs on the host | 21:37 |
DocScrutinizer05 | [2012-12-30 18:35:23] <kerio> freemangordon: what's this new libpng? | 21:38 |
DocScrutinizer05 | [2012-12-30 18:35:25] <kerio> NEONized one? | 21:38 |
DocScrutinizer05 | [2012-12-30 18:35:43] <freemangordon> luf: why don't you test it, pngtest binary is here http://merlin1991.at/~freemangordon/libpng/ | 21:38 |
DocScrutinizer05 | [2012-12-30 18:35:46] <freemangordon> kerio: yes | 21:38 |
whitequark | hozer: there is no point to. well, you could use Rubinius (Ruby with LLVM backend), or JRuby (which is pretty awesome), but x86 hw is fast enough to use anything | 21:38 |
whitequark | wpwrak: you see, in my case there isn't even such a question. Everything is a method call. 5.0 + 10.0 is 5.0.+(10.0) | 21:39 |
hozer | well, I want to develop stuff for the embedded platform in python+ruby | 21:39 |
hozer | what I really want is something that outputs YASEP code ;) | 21:39 |
whitequark | wpwrak: for the target, it's defined as a plain primitive floating-point operation. for the host, you can write whatever code you'd want to emulate however weird the behavior of your target is. all completely transparently :) | 21:40 |
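[Editor's note: whitequark's point above ("5.0 + 10.0 is 5.0.+(10.0)") is standard Ruby semantics: every operator is sugar for a method call, which is what lets a compiler treat all operators uniformly and redefine them per target. A minimal sketch of that semantics in plain Ruby (the Meters class is a made-up illustration, not from the discussion):]

```ruby
# In Ruby, every operator is an ordinary method call:
# 5.0 + 10.0 is just sugar for 5.0.+(10.0).
a = 5.0 + 10.0
b = 5.0.+(10.0)          # the explicit method-call form
c = 5.0.send(:+, 10.0)   # the dynamically dispatched form

raise "mismatch" unless a == b && b == c

# User-defined classes hook into the same mechanism by
# defining a method literally named "+".
class Meters
  attr_reader :n
  def initialize(n)
    @n = n
  end

  def +(other)
    Meters.new(n + other.n)
  end
end

sum = Meters.new(2) + Meters.new(3)
puts sum.n  # prints 5
```

[Since an operator is just a method, a compiler like the one discussed can resolve Float#+ to a primitive floating-point instruction on the target while emulating any target-specific behavior on the host.]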
whitequark | hozer: port LLVM to codegen for it | 21:40 |
hozer | hopefully LLVM codegen is a lot easier than GCC codegen | 21:41 |
whitequark | YASEP is, to be honest, somewhat fringe for me, but I don't see why the underlying concepts couldn't work. I just don't expect it to become widespread | 21:41 |
whitequark | hozer: writing for LLVM is a joy. except for the whole C++ part, but they use a subset of C++ which doesn't hurt your brain and isn't slow | 21:42 |
hozer | well, so far yasep is the cleanest open-source processor design I've run across. | 21:42 |
hozer | I want a synthesizable embedded cpu core, first for fpgas and eventually for homebrew ASIC | 21:43 |
whitequark | (homebrew ASIC) things have gotten really cheap now. I don't see why someone sufficiently motivated couldn't reproduce the 4004 in their garage | 21:44 |
hozer | leon-sparc might work, but it's a bit heavyweight | 21:45 |
whitequark | it shouldn't be that hard to replicate top-notch 1960's tech in 2012. | 21:45 |
hozer | there's no way I'm ever going to produce a leon-sparc in my garage, at least for 5-10 years | 21:45 |
whitequark | leon-sparc absolutely | 21:46 |
whitequark | but 8051? why not? | 21:46 |
hozer | I'd prefer a cleanroom open-source design | 21:46 |
whitequark | well, by saying "8051" I mean the order of complexity, not a particular ISA or design | 21:46 |
whitequark | also, why not openrisc32? | 21:47 |
hozer | find me a git or mercurial repo of openrisc32, and I'll start trying to build an fpga bitstream later today :P | 21:47 |
hozer | svn + opencores.org | 21:48 |
hozer | opencores.org's obnoxious registration and clunky interface drove me away. | 21:48 |
whitequark | hozer: svn co http://opencores.org/ocsvn/openrisc/openrisc/trunk | 21:49 |
hozer | whitequark: I take that back. I could not find that link when I went looking for it | 21:52 |
whitequark | hozer: first link in google :) | 21:52 |
whitequark | http://opencores.org/or1k/OR1200_OpenRISC_Processor | 21:52 |
hozer | Can someone please confirm that http://www.latticesemi.com/dynamic/view_document.cfm?document_id=38780 is actually a DFSG-compliant bsd-style license? | 21:52 |
hozer | whitequark: have you ever registered with opencores.org | 21:53 |
whitequark | hozer: probably no | 21:54 |
whitequark | I don't see an entry in keepassx | 21:54 |
hozer | well, I wonder if they changed it. I tried to download some variation of the OR12k SOC and I had to register before I could get SVN access | 21:55 |
whitequark | oh. no idea about that | 21:55 |
whitequark | hozer: there are some interesting clauses in that license | 21:55 |
whitequark | namely, export restrictions | 21:55 |
whitequark | also you need to clearly identify the parts you've changed, but I think that lies within DFSG. IANAL, though. | 21:56 |
hozer | I guess that explains why milkymist doesn't include it | 21:56 |
hozer | I guess I'd rather spend time thinking about LLVM yasep codegen than think about export nonsense | 21:58 |
whitequark | hozer: it's also LM8 | 21:58 |
whitequark | you probably wanted LM32. I'm not sure, though. | 21:58 |
whitequark | (this is actually the first time I've ever seen LM8...) | 21:59 |
hozer | oh yeah, and the or12k repo includes the whole damn kernel | 21:59 |
whitequark | also complete toolchain | 21:59 |
whitequark | and it's in this SVN abomination :/ | 21:59 |
hozer | lm32 appears to be at http://www.latticesemi.com/dynamic/index.cfm?fuseaction=view_documents&document_type=175&sloc=01-01-08-11-48&source=sidebar | 22:01 |
hozer | if I have to screw around with toolchains, I'd rather screw around with fpgatools ;) | 22:01 |
whitequark | it is also GPL, which doesn't have that export nonsense | 22:01 |
hozer | or12k is gpl? | 22:02 |
whitequark | hozer: LM32 is | 22:02 |
hozer | !! | 22:02 |
whitequark | OR is LGPL | 22:02 |
hozer | so where do I download the actual LM32 core then, from that link above? | 22:02 |
Action: hozer gives up on checking out YATC (yet another toolchain) | 22:03 |
whitequark | hozer: http://www.latticesemi.com/products/designsoftware/micodevelopmenttools/index.cfm | 22:03 |
whitequark | sooo http://www.latticesemi.com/dynamic/index.cfm?fuseaction=view_documents&document_type=65&sloc=01-01-07-20&source=sidebar | 22:04 |
whitequark | hozer: YATC? | 22:06 |
hozer | or12k+linux+gcc+gdb+etc.etc.etc | 22:06 |
hozer | LM8 looks a lot easier to deal with | 22:07 |
whitequark | depends on your task. | 22:08 |
whitequark | I probably won't ever use an 8-bit micro for a new real-world project. | 22:08 |
whitequark | wpwrak: btw, we've talked about this before | 22:10 |
whitequark | I rechecked, and all STM32 families are cheaper than equivalent ATmegas | 22:11 |
whitequark | often substantially (2x) | 22:11 |
whitequark | of course, you must not mind TQFP/QFN and 3V3. but that's all. | 22:12 |
hozer | minsoc requires an opencores.org account .. http://www.minsoc.com/1_0:configuration | 22:14 |
whitequark | minsoc? | 22:15 |
whitequark | oh I see | 22:16 |
hozer | once I gave up trying to make sure I could check everything into my own git/mercurial and just ran the setup script, it seems kinda nice :P | 22:20 |
hozer | it's downloading/building gcc/gdb for or32 now | 22:21 |
hozer | So how does the stm32 get to be so cheap? What will it take for an or32 or openrisc-minsoc to match the stm32 prices? | 22:22 |
wpwrak | whitequark: STM32 and atmega are in different performance classes. and yes, the high-end avr are crazily expensive. | 22:35 |
kristianpaul | hozer: have you built a compiler for the LM8 from scratch? | 22:59 |
kristianpaul | I just didn't figure out where Lattice has that source code.. perhaps in the same MicoSystem RPM, but I haven't checked in depth | 22:59 |
--- Mon Dec 31 2012 | 00:00 |
Generated by irclog2html.py 2.9.2 by Marius Gedminas - find it at mg.pov.lt!