# #milkymist IRC log for Wednesday, 2011-04-27

01:46 Fallenou: so in RTEMS I just need to modify a pointer value to point freely at memory, right? or are there limitations on what I can point at, considering that the FS is also in RAM?
01:46 it may sound stupid, but I need to confirm
06:07 kristianpaul: look at how registers are read from and written to
06:07 you can use the address directly
06:07 beware of cache problems, volatile, etc.
08:12 lekernel, were you imagining a separate TLB for the instruction and data buses? (seems best given the dual-ported design of the LM32)
08:33 yeah
08:52 aw: so it seems the new protection system works great
08:53 http://en.qi-hardware.com/wiki/Protection_of_Reversed_Polarity_on_DC_plug-in#Sch._D
08:53 yes, SCH D.
08:53 I am using the official adapter to record data again though...
08:54 the holding current actually seems higher than any I measured before. yup, of course it must be, due to the 2A fuse.
08:55 meanwhile I am watching the temperature, especially with our current adapter, to see the surrounding temperature around the DC jack.
08:56 this is mostly what I am checking now. :-)
08:59 you can also see the value marked '2.85A' is my limit from the lab power supply... and its output/capability won't drop too much under load. I think I need to do a 'burn-in' run for at least 1 week, or do ageing on the adapter.
09:02 lekernel, I really forgot that 1A must be available for the two USB host ports; thanks, that last email from you reminded me.
09:12 aw: why do you have 40mA going through the diodes at 5V?
09:13 lekernel, where?
09:13 table 1, non-reversed: 5 / 4.994 / 4.992 / 0.04 / -
09:14 umm.. it's the no-load condition (without MM1)
09:14 yes
09:14 there's another zener in the same series with a 5.6V voltage, maybe it's better to take that one
09:15 no.
09:15 with yours the minimum specified zener voltage is 4.85V, that might explain that 40mA current
09:16 when initially powering up the NO LOAD circuit, the fuse is cold, as before it goes into the 'holding' stage.
09:16 you didn't get my point. the thing is that with a 5V supply, your circuit should consume ZERO power. but instead you have 40mA through the diodes.
09:17 the fuse can still have current flowing even while it stays at the 'holding' value; the current then slowly rises to its 'cut/trip' value.
09:18 do we really want this protection circuit to continually consume power and get hot?
09:18 hmm.. I know it seems strange; I'll measure again later. :-) 5V is less than 5.1V, so you think there should be no current. :-)
09:18 well, your measurement is probably correct
09:19 the diode datasheet specifies that the zener voltage can be as low as 4.85V
09:19 I actually haven't decided whether to use this circuit now.
09:19 and we do not want that, so I'm suggesting that we take the 5.6V diode instead, with a minimum voltage of 5.32V
09:19 ...ever since I saw/discovered those temperatures.
09:20 btw, even if we pick a 5.6V diode, I can imagine that the heating will still be there. this is worse than rc2.
09:21 it will still get hot **when the user exceeds the specified voltage**
09:21 well, the truth is that this h/w batch is better than rc2 in having a protection function.
09:21 but yes.
09:21 not when they use the recommended adapter
09:21 with your zener, it would get hot with the recommended adapter
09:22 getting hot when the user does something stupid isn't a problem
09:22 so I really haven't decided this. I even think that I personally don't like this batch now.
09:22 I do. 2A fuse, 5.6V zener, done.
09:23 so I am trying to find out how warm our adapter will get.
09:23 with the 5.6V zener there should be ZERO current and ZERO heating
09:24 well... good idea on 5.6V though.
09:25 the USB spec needs 4.75~5.25V too, so a 5.6V diode is over that. that's why I picked 5.1V.
09:25 yeah I know
09:25 but I can try it, though, to see how low it will be. :-)
09:26 but 5.6V for a short period shouldn't do much damage, and is definitely better than having 20V or so if the user is stupid
09:26 and there is still good protection against reversed polarity or AC adapters
09:26 but in truth we don't want users to use a 20V adapter at all.
09:27 I know. but the whole point of this protection is to provide some security against human stupidity
09:27 and I insist on "some", as stupidity is infinite and there can be no fully adequate protection
09:28 so we'd declare the board as "suggested input range: 4.75V ~ ?V"..
09:28 yeah
09:28 no, we declare it as a *mandatory* input range
09:28 no change compared to rc2
09:28 but there is an additional safety belt if users do not listen to that
09:28 so it ends up doing less damage
09:29 well.. wait, I supposedly do not provide a real condition. :-)
09:30 5.6V is the 1N5339BG
09:30 yup... that's why I said I haven't decided whether we need these h/w patches... too many unknown conditions could happen.
09:30 just take that, do some quick testing and go ahead
09:30 umm.. yes
09:31 ha... you really want to try that 5.6V even though it's over the 5.25V for USB?
09:31 the wanted result is that the board should a) have no regression, b) incur less or no damage when fed inappropriate voltages
09:31 surely I can quickly go for it.
09:31 yes, definitely
09:31 as I said
09:32 5.6V on USB wouldn't damage much in most cases, and is still a lot better than whatever overvoltage an inappropriate adapter would give
09:32 imagine this situation: a user plugs a 20V adapter into the M1
09:32 okay.. imaginable though...
09:32 with USB devices on it
09:33 without the protection you get 20V on the port, and this will probably break the USB devices
09:33 sorry, I am going outside now...
09:33 talk to you later.
09:33 with the protection you get 5.6V or so for a dozen seconds, and this will probably NOT break the USB devices
09:33 later I'll come back to this. cu
09:33 so. 2A fuse, 5.6V zener.
09:34 period :-p
09:34 cu
09:34 time to go.. cu
09:45 lekernel, a thought: couldn't an LM32 TLB just work like CSRs work right now?
09:45 in 'kernel mode' there is no address translation
09:46 in 'user mode' you use a CAM lookup of these TLB registers for the appropriate page
09:46 and if there is a miss, a segfault exception is raised
09:46 and the OS has to fill in the missing page into some CSRs
09:46 we already have to save/restore 32 registers on context switch, so saving some 16-32 extra TLB entries doesn't seem like much more overhead
09:48 i guess the CSR namespace has been filled up too much with other CSRs, but a single new instruction 'WTLB' that behaves almost like 'WCSR' should be enough to get the job done
10:04 terpstra: the kernel also needs to be able to copy to/from user space. better if it can use the TLB for this, instead of having to figure out these things "manually". could be a one-shot switch, though. e.g., set a bit that makes the next access use the TLB, then switch back.
10:04 terpstra: another thing for the kernel: for vmalloc, you also want the MMU in kernel space
10:05 wpwrak, why does it need the TLB to copy to user space? it knows which page is at which address for user space, so it can just copy to the appropriate page's physical address
10:06 terpstra: yes, it's possible but messy
10:06 if i recall correctly, the linux kernel already has a function you are supposed to call when accessing user-space memory via a pointer provided from user space
10:07 ie: if you get a pointer from userland via an ioctl, you are supposed to convert it for use inside kernel space
10:07 terpstra: yup. you have these functions. as i said, you can do all this without mmu support, but it's a lot of overhead
10:08 not so much overhead as compared to reloading the TLB, i'd wager... ?
10:08 terpstra: for example, if you copy a string byte by byte, you need to do a page table lookup and permission check for each access. messy.
10:08 what? why would you do that?
10:08 do one lookup for the block transfer
10:08 terpstra: for larger accesses, you also have to check if you're crossing a page boundary
10:09 crossing a page boundary, sure
10:09 but doing a single table lookup per page copied sounds like negligible overhead to me
10:09 terpstra: yes, if this is implemented as a block transfer. this isn't always the case.
10:10 (reloading the tlbs) why not have two? one for user space and one for kernel space
10:10 area cost
10:10 is the cost prohibitively high?
10:10 well, the TLB will need to be a fairly highly associative cache
10:10 and we'll need one for each bus already
10:11 making kernel mode need it too doubles the cost
10:11 (each bus) you mean instruction and data?
10:11 yes
10:12 you probably don't need an I-TLB for the kernel. so the extra cost is only +50% ;)
10:13 for an FPGA we probably can't make it a fully associative cache like in a real CPU... as we don't want to use tons of registers, so we will need a 2- or 4-way associative TLB in order to use FPGA RAM blocks
10:13 the TLB is going to be really expensive in area, i think
10:14 going to be slow too. :-/
10:14 well, you could make a really simple TLB (e.g., one entry) and collect statistics :)
10:16 you need, in sequence: RAM block indexing (based on the low page id bits), then comparison of the TLB tag to the high page bits, a MUX to pick the correct entry in the associative cache, then comparison of the TLB result to the L1 cache tag for the physical tagging check, and finally the signal has to trigger an exception
10:16 that's some deep signalling...
10:17 all this happens between two clock edges
10:18 yeah. well, you have to do this anyway, whether you have a kernel tlb or not.
10:18 yes
10:18 but a kernel TLB just makes it even bigger ;)
10:19 ah, and you don't need the kernel tlb for kernel/user space access. you'd just reuse the user space tlb. what you need is a way to switch it on while in kernel mode.
10:19 maybe just one TLB
10:20 and have a kernel mode bit enable access to a 'restricted' memory range
10:20 then you can happily re-use user-space pointers when copying to/from your kernel-land memory in the restricted range
10:20 not sure how badly you need vmalloc in the kernel. it's kinda frowned upon, but not enough that people wouldn't use it ...
10:20 the restricted range doesn't go through the TLB
10:21 think 1GB is enough memory for userland? ;)
10:21 that would be more or less equivalent to a 2GB/2GB split. yes, a possibility
10:22 or maybe: 2GB userland, 1GB kernel land, 1GB memory-mapped IO non-cached region
10:22 user mode cannot access addresses with the high bit set
10:22 you're very generous with that address space :)
10:22 addresses with the high bit set do not go through the TLB
10:22 well, for a first version that'll do. can always be improved later.
10:25 unfortunately, my idea of a WTLB instruction won't work
10:25 since a TLB entry will need to be 40 bits wide
10:26 well, i guess it could be made to work if we have 256 TLB entries. *cackle*
10:27 <1 bit user/kernel> <19 bits virtual page number> <12 bits page offset>
10:27 why 40 bits?
10:28 the 19 bits of virtual page number = <13 bits TLB tag> <6 bits TLB index>
10:29 then your TLB entries have: <13 bits TLB tag> <19 bits physical address>
10:29 and it fits!
10:29 and only 32 TLB entries needed
10:29 (i was imagining a full 20 bits for the virtual address and physical address)
10:30 this way you can pack it better, though
10:30 ah, regarding the split: it's not so nice, because you'd then have to check that user pointers are in the correct address range, along with overflow issues. probably still better to have a means to just switch to user mode for the next access.
10:32 you also need permission bits: read, write, and execute would be desirable, too
10:32 lies
10:32 we have two TLBs, one for data and one for instructions
10:32 so execute means it is in the instruction TLB
10:32 i suppose read/write needs a bit, though, for the data bus
10:32 very good. so just one bit for write.
10:32 yes
10:33 damn you
10:33 hehe :)
10:33 there be not enough bits ;)
10:34 should it be possible for a user to map device memory?
10:34 i suppose this is useful especially for a microkernel
10:34 hmm yes. that would be very nice to have.
10:35 so you need a full 20-bit physical address in the TLB
10:35 also for plain user space. think of the old architecture of the X server.
10:35 or all my current atrocities surrounding UBB on the ben ;-)
10:35 so 20 bits for the physical address, 1 bit for the read/write flag.....
10:35 that means only 11 bits for the tag
10:36 i guess if you had 8 bits of TLB index (256 entries... eek)
10:37 that's too big
10:37 or give up on fitting the TLB entry in 32 bits
10:39 or go for a bigger page size ;)
10:40 keep things easy - use 1 GB pages :)
10:40 an 8k page size would mean <19 bits physical address> and thus <12 bits virtual address tag> and only <6 bits for the TLB index>
10:41 so back to 32 TLB entries
10:41 that is nice
10:41 plus, that way you'll find all the programs that assume that a page is 4 kB :)
10:42 they've been fixed already, i think
10:42 debian must run on stuff with 8k pages by now
10:42 afk
10:43 run or stumble :) well, you can try 8 k and if it sucks too much, go to 4 k
11:04 can't we just disable address translation in kernel mode?
11:04 this way we're also backward compatible with programs like the RTEMS stuff that do not use the MMU
11:04 they just run in kernel mode all the time
11:08 lekernel, that's what i wanted to do too
11:08 but wpwrak says it's a problem
11:09 so what do you think about just grabbing the entire TLB on context switch, like we have to handle registers anyway?
11:10 it doesn't/shouldn't be as big as the L1 caches anyway
11:10 depends... how big is the TLB?
11:10 and how do we ensure compatibility with programs that do not use the MMU?
11:11 well, i also liked the idea that kernel mode = no MMU... then you have your compatibility
11:11 I don't think there's a problem, Norman pointed out on the list that Microblaze does that
11:11 i've been reading around, and it seems that the TLB for mips isn't so big
11:11 even the AMD64 only has 1024 entries
11:11 so 32 should be fine, i guess
11:11 probably 16 is already plenty
11:12 http://www.linux-mips.org/wiki/TLB
11:12 R2000 had 64 entries
11:13 R4000 had 32 to 64
11:13 (so later versions had fewer entries, which seems suggestive to me)
11:14 "TLB is organized as 3-way set associative."
11:14 hmm...
11:14 yeah, we definitely will need associativity
11:14 if we have only 32 entries, it can be fully associative, no?
11:14 i suppose we could try without at first, tho
11:15 problem with fully associative is it rules out using RAM cells
11:15 you need full registers then
11:15 which is a lot
11:16 on my Cyclone III the LM32 needed only like 1k registers for the full design, i think
11:16 we can also have no associativity and a lot of TLB entries to compensate
11:16 so we take advantage of the BRAM
11:16 i think for a first version this makes the most sense
11:16 but reloading the TLB would take time during context switches then...
11:17 however, i don't totally buy into 2- and 4-way associative being like a 2x and 4x bigger cache
11:17 though probably not a lot more than those architectures which flush the L1 caches on each context switch
11:17 there are many byzantine scenarios that can happen in practice where associativity is >>> more slots
11:17 yeah, sure
11:18 as a general rule, x-way associative has better performance than x times the size
11:18 but for a first version, i think non-associative makes sense
11:19 http://www.xilinx.com/support/documentation/application_notes/xapp203.pdf
11:20 non-portable though
11:20 that's nice for you xilinx users
11:21 yeah... and xilinx patented the srl16 too
11:21 so basically one LUT can decode a 4-bit index?
11:21 that's possible on altera too
11:21 problem is that you can't reprogram the LUT at run time ;)
11:22 i guess this is the value-added part of the xilinx approach?
11:22 ahh, yes, i see it now
11:22 the SRL16E diagram
11:23 to mimic an SRL16E portably i would need 4 registers and 3 LUTs, i think
11:24 anyway
11:25 wpwrak, do you realllllly need the mmu in kernel mode?
11:25 terpstra: maybe the best approach is to implement a trivially simple TLB, run a test load (e.g., kernel compilation, emacs, whatever) and keep statistics of what happens. then pick a design accordingly.
11:26 we also need a way to determine the address that triggered a TLB miss
11:26 terpstra: (mmu in kernel mode) well, for vmalloc ...
11:26 wpwrak, why does vmalloc need an mmu?
11:26 can't it just allocate from the physical address space?
11:27 terpstra: well, I think that having a large non-associative TLB in a block RAM is good for starters
11:27 terpstra: because it can give you virtually contiguous allocations even if your pages are all physically fragmented
11:28 "Code that uses vmalloc is likely to get a chilly reception if submitted for inclusion in the kernel. If possible, you should work directly with individual pages rather than trying to smooth things over with vmalloc."
11:28 lol
11:28 s6 FPGAs have RAM blocks of up to 16 kilobits each... a few, or even just one of them, can hold a sizable number of TLB entries
11:28 i don't think we need/want more than 32 TLB entries
11:28 by keeping the TLB small we can more easily just load/store it from the kernel instead of trying to preserve it like the L1 cache
11:29 terpstra: (chilly reception) for sure. yet it exists, so .. :)
11:29 terpstra: you mean for encoding the WTLB instruction?
11:29 terpstra: anyway, you can make the kernel tlb fairly inefficient.
11:29 I don't see what the problem is with a large TLB, except more context switch overhead
11:29 yeah
11:29 i don't want context switch overhead
11:30 either we need to leave stale TLB entries that get flushed on demand (more work for the hardware)
11:30 terpstra: ah, and i think modules may use the mmu too. so, an i-tlb for the kernel as well. life sucks, doesn't it? :)
11:30 or we need to save/restore more TLB entries on context switch
11:31 modules get loaded at different addresses
11:31 i don't think there's MMU action there
11:31 that's why it's a pain to find the symbol of a module from a kernel register dump
11:31 otoh a larger TLB means fewer TLB misses
11:31 well
11:31 I don't think it'd be hard to make the TLB size configurable with this approach
11:31 so we can just try and see :-)
11:31 it impacts the layout of the TLB, tho
11:32 if you want to pack the TLB entries into 32 bits ;)
11:32 in a perfect world you could have 32 TLB entries, each 32 bits wide
11:32 then it would have a 'normal' LM32 register encoding
11:32 ie: a simple WTLB instruction would work just like WCSR does now
11:34 just give up on this LM32 stuff, use OpenRISC ;)
11:34 ...
11:34 We've already got this MMU stuff going
11:34 our kernel port is solid, too
11:34 hmmmmm
11:34 :)
11:34 terpstra: (i-tlb) you're right. it doesn't actually run code from the vmalloc'ed region
11:34 one interesting experiment I want to do very soon is actually calculate the overhead of TLB misses and reloading
11:34 and the effect TLB sizing and associativity have on that
11:34 juliusb, how does the openrisc do its tlb?
11:35 good question. the architecture is fairly flexible - it allows various sizes and up to 4-way associativity
11:35 i'm not across the details of it specifically off the top of my head
11:35 physically tagged and indexed?
11:37 well,...
11:37 yeah, let's use openrisc. then the flickernoise framerate would drop to something like 0.2 fps while the FPGA LUT count increases :-)
11:37 no, I think virtually tagged
11:37 hang on, no
11:37 lekernel: prove it :)
11:38 no, I agree, or1200 ain't so tiny
11:38 juliusb, to be honest i haven't fairly evaluated the openrisc
11:38 it is just so big
11:38 but, i'm serious about using it if you're considering doing a Linux port
11:38 but adding an mmu to the lm32 will make it big too
11:38 it's been like 2 years of work for us just to get the kernel port and toolchain to a point where they're usable now
11:39 terpstra: I don't think that a simple TLB in a block RAM would make it very big
11:39 we have some good kernel developers now, and the HW seems quite stable across various technologies
11:39 my guess is something like 2 BRAMs + 200 LUTs, not more
11:39 lekernel, the OR is only 6x bigger than the lm32 :)
11:39 lekernel: but as described before, you need a lot more than just a block ram; you need a tag ram and then all the appropriate error detection and exception handling logic
11:39 for each port
11:40 ... it would be an interesting experiment though
11:40 yes, juliusb is right that it will cost us
11:40 sure, that's what those 200 LUTs are for
11:40 .. hey, by the way, why do you want to run Linux in the first place??
11:40 'cause i want debian!
11:40 it's not a good idea for embedded stuff, I argue - you have this MMU mess, and it only gets worse if you want shared library code
11:40 ;)
11:41 you need all that indirect function calling garbage
11:41 (for gsi/cern we don't want linux, tbh)
11:41 it helps extensibility at the software level, but that's it, right?
11:41 i am just interested from a hypothetical point of view
11:41 i think you sacrifice a lot of performance just to have the basic benefits of GNU/Linux, namely the plethora of software out there
11:42 i agree with you
11:42 same here. i'm globally satisfied with RTEMS.
11:42 i think 2-way could be useful to avoid thrashing block copies. a dirty approach would be to have only one entry 2-way. basically, if you evict a tlb entry, you move it to the 2nd way.
11:42 (that's for data)
11:42 software based on an RTOS, however, is far more complicated to write and maintain than stuff that's POSIX compliant for Linux
11:42 wpwrak, that's what a victim cache has been for, traditionally ;)
11:43 not that much
11:43 not sure what code would be most happy with
11:43 ...i mean more complicated to write and then port to a new design or architecture, etc.
11:43 as a matter of fact, a lot of 3rd party POSIX stuff runs almost flawlessly on RTEMS
11:43 I have freetype, libpng, libjpeg, libgd, mupdf, ...
11:43 ya, I saw RTEMS is POSIX friendly
11:43 the main advantage of an mmu: fork()
11:43 that is very good
11:43 i think most of the rest can be dealt with
11:44 terpstra: aah, already invented. darn.
11:44 wpwrak, i didn't mean to invent it --- i meant that's the functionality you gain from an mmu
11:44 you can't really do fork() without an mmu
11:45 but who is going to do the port of the kernel to LM32??
11:45 or does it exist already?
11:45 there is a uclinux port, afaik?
11:45 oh good, 2.4 kernels are fun
11:45 :)
11:46 there's no such thing as far as I'm aware, it got merged with the mainline a long time ago, no?
11:46 i've not used it
11:46 i just know lattice claims this
11:46 terpstra: (invented) i meant the victim cache
11:46 wpwrak, ack
11:47 terpstra: there is a super crappy uclinux port by lattice, which larsc, mwalle, Takeshi and I have improved
11:47 it's still not merged upstream though
11:47 is it 2.6 or 2.4?
11:47 i've just looked, they've got a 2.6 version now
11:47 2.6... in fact we follow upstream
11:47 but there's an MMU-less kernel now, right? and uClibc
11:47 yes
11:48 so if an mmu were added, it'd not be so hard to get 'proper' linux on it, i guess?
11:48 what's the difference, then, between uClibc and the real kernel?
11:48 err, uClinux and the real kernel
11:48 they strip a lot of crap out of it?
11:48 I don't know. I have little knowledge about linux memory management internals
11:48 uclibc has nothing to do with having an mmu or not
11:48 uclibc is just a smaller version of libc
11:49 uclibc is under 200k, compared to > 3MB for glibc
11:49 you usually see uclibc + busybox on embedded devices like routers etc.
11:49 where you have 8-32MB of RAM
11:49 those systems also have an MMU
11:49 i'm sure there's some NO_MMU stuff in uClibc
11:49 sure, to remove fork() ;)
11:49 * wpwrak crawls to bed and hopes for happy dreams of an mmu :)
11:50 you won't be getting fork() without an MMU
11:50 and that's why even embedded devices with linux have one
11:51 those cheapo little routers, kindles, android phones, etc --- they all have an MMU even when they have almost no memory
11:51 (tho the kindle actually has half a GB of ram)
11:52 sure, it's an ASIC, and probably the extra silicon required to put in an MMU, and the reduced amount of software executed to do virtual memory management, is worth it
11:52 yep
11:52 If you're really, really stretched for area, maybe MMU-less makes sense
11:52 we should really see how much area a completely primitive mmu takes
11:52 if lekernel is right that it's 200 LUTs or less, then we might as well have it on an FPGA too
11:53 in milkymist we're only using 44% of the fpga area, so an mmu would get merged provided it does not slow things down or introduce other regressions
11:53 I think you'll want all the performance you can get on an FPGA running Linux, and it would make a lot of sense to have one
11:53 We're so concerned about performance on Or1K linux that we're looking at doing hardware page table lookups instead of handling misses
11:53 ... in software
11:54 it's really, really slow
11:54 lm32 is very fast
11:54 i bet i could write a TLB replacement algorithm that ran in under 50 cycles
11:54 possibly even under 30
11:54 no, i'm not talking /MHz here, I'm talking overall performance, because Linux is just a state-swapping machine
11:55 always loading and storing and accessing various process states
11:55 i see
11:55 terpstra: sure, but what about saving and configuring your state to get to the place where you can then do your TLB algorithm in 30 cycles??
11:55 that's a good reason to make kernel-land not mmu-mapped?
11:56 I think it's a good reason to avoid Linux :)
11:56 juliusb, i was including the save/restore in that 30-cycle estimate
11:56 if we added an mmu to the lm32, it would launch an exception handler where you do a quick LRU/heap operation and then an eret
11:56 terpstra: It's not so much, but with a pissweak TLB you're doing it all the time (seriously, every new function call) and it adds up
11:57 hmm
11:57 juliusb: how many function calls are new?
11:57 you're talking about lazy linking, right?
11:58 I'm not sure exactly how it works, but I'm pretty sure it occurs quite frequently
11:58 well, anything outside of the page
11:58 well, instruction and data too, mind you
11:58 wouldn't the page with the GOT stay in the TLB most of the time?
11:58 well, the TLB miss on each new function call just hits at application startup
11:58 hopefully the data TLB miss doesn't occur so often
11:58 the code gets patched after that and no longer misses the TLB
11:58 lekernel, the code doesn't get patched -- the GOT gets filled
11:59 I'm talking about statically linked programs here, I don't know about dynamically linked stuff
11:59 your function calls to global symbols go via the data bus
11:59 we don't have dynamic linking yet in our toolchain, but we're working on it, and it looks like extra overhead for userspace execution
11:59 yes, indirection is expensive
12:00 i'm somewhat skeptical that the TLB miss rate is so high
12:00 but I'm contributing to this discussion because I'm going to be starting some work shortly on really gauging the overhead of TLBs
12:00 why would the mips folks move from 64 TLB entries to 48 if it is such a problem?
12:01 terpstra: yeah, you're probably right. but in either case, I don't think that lazy linking significantly increases any TLB miss rate.
12:01 and our feeling, after playing with our port, is that TLB misses occur often, and a good way to increase time spent doing useful things, rather than management overhead, is minimising this
12:01 juliusb, fair enough.
12:01 how big is your current tlb?
12:01 64
12:01 we can have up to 128
12:02 and you still have lots of misses, eh? that's somewhat worrying. 2-way associative?
12:02 but it is single-way
12:02 ah
12:02 then i believe you
12:02 yes, I want to add ways
12:02 most TLBs in 'real hardware' are CAN
12:02 CAN?
12:02 so fully associative
12:02 ah ok
12:03 sorry, CAM
12:03 i typo'd
12:03 2-way associative looks doable... lm32 does it for the caches
12:03 yes
12:06 ... or come and pimp out the OR1200's TLBs to do multi-way ;)
12:06 hmm
12:06 give me the or1k vs. lm32 sales pitch :)
12:09 well, I'm not the expert, but I know the licensing on the LM32 isn't pure BSD (it has some taint from LM), whereas the or1200 is all LGPL
12:09 true
12:09 i don't know the LM32 architecture so well, but I think OR1K is a pretty solid architecture, missing a few key things like atomic synchronisation instructions
12:09 but those can be added
12:09 OR1200 as an implementation is bad, I think
12:10 I've been hacking on it for a few years and have hopefully made it better, but certainly it hasn't become leaner and more efficient
12:10 which diminishes your point about the LGPL
12:10 our toolchain is good now
12:10 so your position is that or1k + toolchain + kernel support is good, but the or1200 implementation is the bad part?
12:10 our toolchain was a joke, but now it's good
12:11 yes, but it at least has MMUs already in there to save you working on that, and I think having a full-on kernel port (we're going to start pushing for acceptance in GCC and Linux sometime this year) is a pretty big deal
12:11 it's a lot of work to add all the bells and whistles
12:12 or1200 isn't bad, it's just not awesome
12:12 ... i may know of a rewrite in progress
12:12 ... but that's a little ways off yet
12:12 gcc/linux kernel: true. but as far as I'm concerned it is not my priority
12:12 binutils+gcc for lm32 are already in mainline
12:12 lekernel_: I understand you need as much performance as possible, but again I ask why even consider Linux when you need to be productive on almost every cycle; the pitch kind of isn't for that
12:12 so here the lm32 is further along than the or1k
12:12 it's for anyone considering Linux
12:13 neither is the MMU, and I cannot accept the regressions that OR1K would introduce just to get some work already done on the MMU
12:13 ok, sure, but we will be sometime this week
12:13 terpstra: otoh the mainline lm32 gcc is often broken... it was somewhat acceptable in gcc 4.5 and was badly broken in 4.6
12:13 is there a good document for the or1k comparable to the lm32's archman pdf?
12:13 i'm saying that, as an open source CPU that has a working full-on kernel port, I would consider or1200
12:14 maintaining gcc is a pain in the ass
12:14 yep, but we have guys doing that
12:14 terpstra: yes, we have recently re-worked the architecture spec
12:14 cleaned it up, etc.
12:14 could you toss me a link?
12:14 i'd like to read it
12:15 http://opencores.org/download,or1k - click on the openrisc_arch_submit4.odt link
12:15 it's not in SVN yet, I think
12:15 we've still got it out for review
12:15 but... it's on logincores.org (opencores.org, I mean)
12:15 hehe
12:15 gotta register
12:15 juliusb: I'm not considering linux, except for demos and just the fun of it
12:16 juliusb: when are you going to change that policy?
12:16 i have an opencores account, not a problem
12:16 i just had lunch with the guy in charge here, he's not convinced
12:16 I tried
12:16 he argues: what's the big deal - you're getting access to stuff for free, give us some information we can provide to the advertisers who come here, so we can fund the webserver
12:17 juliusb: never discuss too much with stupid people. work around them.
12:17 hehe
12:17 well, there's already a fork happening: openrisc.net
12:17 they got fed up with opencores
12:17 another irritating thing in the opencores policy is the requirement that files be uploaded to your server, which in turn mandates the use of SVN and your web interface, both a lot inferior to e.g. git and github
12:17 ohwr.org
12:17 sure, I think they're fighting a losing battle
12:17 ohh nice, ohwr.org
12:18 (that's where my stuff lives)
12:18 cool, thanks
12:18 anyway, this is an ongoing thing with OpenCores - they still don't see, even after talking a lot with them, why they can't take a little if they give a little
12:19 and btw I can't see why running such a webserver would be so expensive
12:19 I'm at least trying to get them to dump the forums and bugtracker (both some custom hack they got this young guy to do) and use a mailing list and bugzilla
12:19 ya, well, it shouldn't be, but it is if you go about it the wrong way for 3 years
12:19 I think their heart is in the right place - they didn't want OpenCores to die and thought they could make it great
12:19 but I think they're not so open-sourcey
12:19 i probably shouldn't be saying this :P
12:20 anyway
12:20 it's in flux, I hope, and things will change eventually
12:20 meh - until someone writes an open-source hdl toolchain, we don't reallllly have 'opencores' anyway
12:20 well, you're among friends. I'd even dare say you've just joined the #opencores-haters channel *g*
12:20 i know the guy who started openrisc.net well, and it'll be interesting to see the response they get
12:21 hehe, sure, and I'm working hard on OpenRISC and just like to see others getting into the oshw stuff, too
12:21 terpstra: this is under way :p
12:21 i come in peace, but I'm employed by ORSoC and feel I should at least try to provide them with good advice on OpenCores
12:22 juliusb, i don't hate opencores. i hate the blinky flash ads. ;)
;) 12:22 but, anyway, just wanted to point out if you really want Linux on an open source CPU, try Or1K 12:22 I think there's some tuning to be done, like anything, but it's probably a good place to start 12:22 juliusb, i will read the arch manual and then form a more informed opinion :) 12:22 i expect nothing less :) 12:23 but I, too, am very interested in the fully open source toolchain for HDL synthesis and backend 12:23 hence popping in here the other day to ask lekernel_ about his work so far 12:24 heh, it's coming :) 12:25 juliusb, or1k has a branch delay slot? 12:25 wanna help? 12:25 wasn't this proven to be a bad idea by mips? 12:25 learn from the past! ;) 12:25 why is it a bad idea? 12:25 architecture is initially from 1999 12:25 fwiw microblaze has it, and from studies I've read it does provide a performance advantage 12:25 "The most serious drawback to delayed branches is the additional control complexity they entail. If the delay slot instruction takes an exception, the processor has to be restarted on the branch, rather than that next instruction. Exceptions now have essentially two addresses, the exception address and the restart address, and generating and distinguishing between the two correctly in all cases has been a source of bugs for later designs." 12:25 precisely 12:26 i'm dealing with this now, actually 12:26 what is in fact a bad idea is having several delay slots 12:26 just one is still reasonable 12:26 http://en.wikipedia.org/wiki/Classic_RISC_pipeline -- scroll down to the area where they list the reasons 12:26 that reason is just the most pertinent i think 12:26 well, I think the control overhead of having one compared to none is far more than from having one compared to two 12:26 ok... 12:26 it's a hassle for out of order etc 12:27 well a lot of features make a mess of exceptions. out of order execution being most infamous for that. 
12:27 but if you want a simple design, then yeah it's probably better not to have the delay slot 12:27 that sounds about right, but it just adds a little bit of extra complexity where you don't want anything extra 12:27 it does increase performance, so it's a trade-off 12:28 lekernel, it increases performance only if the compiler can find a good instruction to put there 12:28 which at the end of a basic block usually means putting a 'write to memory' 12:28 but pipelines that run really fast now are very long 12:28 yes. but from the paper I've read it still works 12:28 but those are precisely the instructions which generate faults 12:28 part of the idea was to offload complexity into the compiler from the HW, as the HW development wasn't so advanced right? 12:29 but now it just makes things more complicated at the HW level 12:29 sure 12:29 i am a firm believer in simpler cores, but many cores 12:30 and compilers are actually fairly clever now, so I guess that's not an issue, but why cause the HW to be more complex when really there's marginal benefit 12:30 we've carried the hardware supporting crappy sequential software about as far as it can go 12:30 well... if you have OOO execution, delay slots sure make no sense 12:30 yes, as someone who writes, tests and debugs cores, I would eliminate the delay slow 12:30 slot 12:31 but I wouldn't toss it as a definitely crappy idea either 12:31 fair enough 12:31 I think it still does some good in some cases. 12:31 for OR2K, we propose eliminating them http://opencores.org/or2k/OR2K:Community_Portal 12:31 i agree it is a nice way to avoid the wasted instructions you otherwise have 12:31 yes, for the simple 4/5 stage pipelines, they do gain you some advantage compared to not, there 12:32 yup 12:32 juliusb: do you want to help with the synthesis toolchains? 
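The "two addresses" problem quoted above can be sketched in a few lines. This is an illustrative model of the bookkeeping, not LM32 or OR1K semantics; the function name and addresses are made up for the example:

```python
def exception_state(fault_pc, branch_pc=None):
    """Return (restart_pc, branch_pending) for a faulting instruction.

    branch_pc is the address of an immediately preceding taken branch
    when the fault happened in its delay slot, else None.  Restarting a
    delay-slot fault at fault_pc would silently drop the branch, so the
    handler must resume at the branch itself -- the extra architectural
    state ("essentially two addresses") the quote refers to.
    """
    if branch_pc is not None:
        return branch_pc, True
    return fault_pc, False

# Fault in a delay slot at 0x104, branch at 0x100: restart at the branch.
assert exception_state(0x104, branch_pc=0x100) == (0x100, True)
# Ordinary fault: restart at the faulting instruction itself.
assert exception_state(0x104) == (0x104, False)
```

Hardware without delay slots only ever needs the first case, which is where the control-complexity saving comes from.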
12:33 (speaking about delay slots: for the OR2K, sure, eliminate them) 12:34 lekernel: probably not right at the moment, sorry, I was just curious to see how it was looking 12:35 perhaps in a while, though 12:35 I think it's definitely needed and would be very cool 12:35 there are some relatively simple things to do, like implementing Verilog case statements 12:35 mainly i'd be interested to see an open source synthesis engine 12:35 to check the impact of various design choices 12:35 (all that's needed is to translate those statements to IR muxes) 12:35 at least for now, then we'll see how to do things like FSM extraction 12:36 cool, if I get some time i'll let you know, will find out how to get started 12:38 ok. just ask here or on llhdl@lists.milkymist.org if you have questions or problems. 12:39 will do 12:40 btw, I was a bit stuck lately with the placement engine 12:41 I wanted to do post placement packing, but this is rather hard especially with the current chip database architecture 12:41 so I think i'll revert to good old pre-placement packing heuristics for now 12:42 not sure how well it's going to work with the relatively complex s6 slices, but we'll see 12:42 maybe it works great 12:42 as a matter of fact, I think Altera has even more complex logic blocks ("LAB clusters" or something)... and it's not clear how they pack them 12:43 also, with post placement packing, I'd lose one of the potential benefits of clustering, which is that the placer algorithm can be faster because it has to deal with fewer elements 12:44 so perhaps it's simply a bad idea after all 12:44 lekernel, why does an LM32 dcache read (lw instruction) take 3 cycles for result? X stage calculates address, M stage touches cache.... what happens in W stage? 13:06 write to register file? 
13:07 mh 13:07 I don't know 13:08 but at the end of the M stage it could have used the bypass 13:08 just like the 2-stage shift instruction does 13:08 hrm 13:09 there's an "align" step in the block diagram 13:10 so D fetches base register, X adds offset, M fetches the cache, and W 'aligns' the result (and writes back to register file at end of cycle) 13:10 what is this magical align? 13:11 I guess this is for reading bytes or 16-bit words on any offset 13:11 ahhh 13:12 and sign extension / etc 13:12 makes sense 13:12 yes 13:12 thanks 13:13 hi xiangfu 14:29 hi 14:29 +-***** 14:32 hi guyzmo 14:42 hey :) 14:45 sorry, was plugging in stuff 14:45 damn, so sad rlwrap can't work over flterm :/ 14:52 (and all control characters just output garbage) 14:53 hum 15:03 can't get the led par to light up :/ 15:04 did you try it in flickernoise? 15:28 not yet 15:33 of course I'm gonna try it 15:33 control panel -> dmx -> dmx table (called "dmx desk" if you have upgraded, but I don't want to be negative here, but I'd tend to bet you did not) 15:34 ok 15:35 fortunately the dmx desk works with all released versions :-) 15:35 ;) 15:35 damn, why did I forget my DMX cable :-S 15:49 Fallenou: (registers) like in the drivers and sys_conf.h? 16:15 oh, yes i think 16:20 :p 16:20 grmbl 18:37 none of my XLR cables work with DMX signal 18:37 though I remember we had one of them working 18:38 I will have to get one cable from the Gaîté Lyrique tomorrow 18:40 http://colossus.cs.rpi.edu/~azonenberg/papers/litho1.pdf 21:36 http://siliconexposed.blogspot.com/ 21:37 "Since writing it I've made features at 5 micron half-pitch using the camera-port method, and am about to buy a 1-watt 385nm LED as an exposure source. This is way more power than I need so I will be able to use a nice thick diffuser on it. Once the exposure lamp is fixed I should be able to make 75 λ square dies at 5 micron resolution using the 40x objective, or 20 micron using the 10x." 
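The W-stage "align" step discussed above (byte-lane selection plus sign or zero extension for the LM32 load path) can be sketched as follows. The byte-lane arithmetic assumes a big-endian 32-bit cache word; the function is an illustration of the idea, not code taken from the lm32 RTL:

```python
def align_load(word, addr, size, signed):
    """Sketch of a W-stage load 'align': pick the addressed byte lane
    out of a 32-bit big-endian cache word, then sign- or zero-extend.

    word   -- the 32-bit word fetched from the cache in the M stage
    addr   -- the load address (only the low 2 bits matter here)
    size   -- access size in bytes: 1 (lb/lbu), 2 (lh/lhu) or 4 (lw)
    signed -- True for sign-extending loads
    """
    shift = (4 - size - (addr & 3)) * 8        # big-endian lane select
    bits = size * 8
    value = (word >> shift) & ((1 << bits) - 1)
    if signed and value & (1 << (bits - 1)):   # sign extension
        value -= 1 << bits
    return value

assert align_load(0x81FF0000, 0, 1, signed=True) == -127     # top byte 0x81
assert align_load(0x000080FF, 2, 2, signed=False) == 0x80FF  # low halfword
```

Because this shift-and-extend sits after the cache read, the loaded value cannot be bypassed at the end of M, which is one plausible reason the result takes the extra cycle.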
22:14 http://i.imgur.com/DR6O9.jpg 22:19 http://i.imgur.com/s9RMP.jpg 22:19 lekernel: the first one looks like a forest ;-) 22:27 hi azonenberg 22:28 hi 22:28 welcome, honored to see you here :) 22:28 i'm sebastien 22:28 ah, k 22:28 Lets move our discussion to here rather than fb chat so other people can see 22:28 The paper i sent you only describes my work at the 15um node 22:29 ok :) 22:29 Though i did outline the process that I later reached 5um at 22:29 how do you engrave through the silicon? 22:30 I plan to open the project as much as possible btw, all tools etc will be released under an open license (probably BSD or similar) 22:30 excellent :) 22:30 Read the FB note (which i need to post publicly somewhere) 22:30 Long story short, apply hardmask (probably Ta2O5) to the silicon by spin coating and heat treatment 22:30 Spin coat photoresist over that 22:31 expose and develop 22:31 i see 22:31 Etch hardmask with 2% HF (Whink rust remover, same stuff jeri uses for gate oxide) 22:31 Then etch the silicon using 30% KOH / 15% IPA / 55% water at ~80C 22:31 sorry about the dumb question, i'm still going through the pile of material and links on your website and fb :) 22:31 You cant use KOH directly because it will attack the resist 22:31 Lol, no questions are dumb 22:32 For the record i have no formal training in EE myself :P 22:32 my BS (and PhD in a few years) will be in comp sci 22:32 Anyway so the nice thing about KOH is that its very anisotropic 22:32 FeCl3 and similar etchants for copper, if you've ever done home PCB fab, are isotropic - they eat equally in all directions 22:33 So you get rounded sidewalls and such 22:33 But KOH eats along the <100> crystal plane nearly 100x faster than <111> 22:33 And <110> is a hair slower than <100> but not by too much 22:33 cool. I talked about this to a fab employee, and he told me I'd never get any good anisotropic etchant because they are super expensive, hard to buy, etc. 22:33 if it's just KOH, well... 
:) 22:33 If you get <110> you can go straight down (assuming your features are parallel to the <111> plane) 22:34 h/o let me send you a paper 22:34 "Fabrication of very smooth walls and bottoms of silicon microchannels for heat dissipation of semiconductor devices" 22:34 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V44-40D0MGJ-3&_user=659639&_coverDate=06%2F30%2F2000&_rdoc=1&_fmt=high&_orig=gateway&_origin=gateway&_sort=d&_docanchor&view=c&_searchStrId=1730655729&_rerunOrigin=google&_acct=C000035878&_version=1&_urlVersion=0&_userid=659639&md5=c27c2d506cd9137d148a914c8bde1407&searchtype=a 22:34 Look at the figure they have in there (fig 9 i think?) - 400 micron deep etch with almost vertical sidewalls 22:34 if i didnt know better i'd say it was made with RIE 22:34 that's what i used as the starting point for the comb drive process i have on fbook 22:35 Action: lekernel warms up his university proxy to get through the cretinous sciencedirect paywall 22:35 Lol 22:36 I have an openvpn server running at a friend's house 22:36 the machine in my office on campus, and my laptop here, tunnel into it 22:36 then the office machine advertises routes to most journal websites ;) 22:36 that's more sophisticated than what I do... I use ssh redirect and /etc/hosts 22:37 I run OSPF http://pastebin.com/Tn4T2e8k 22:39 .11 is the vpn addy of my box on campus lol 22:39 hm... can't reach any server at uni tonight 22:40 Action: azonenberg mirrors 22:40 have you done multilayer yet? 
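For scale, the anisotropy figures above translate into etch times and undercut like this. The 1 um/min rate is a typical literature number for ~30% KOH around 80 °C, and the 100:1 fast-plane:<111> selectivity is the rough ratio mentioned in the discussion; both are assumptions for illustration, not measured values:

```python
def koh_etch(depth_um, fast_rate_um_min=1.0, selectivity=100.0):
    """Estimate (etch time in minutes, worst-case <111> undercut in um)
    for a KOH etch of a <111>-aligned feature to depth_um.

    fast_rate_um_min -- assumed rate on the fast crystal plane
    selectivity      -- assumed fast-plane : <111> rate ratio
    """
    minutes = depth_um / fast_rate_um_min
    undercut_um = minutes * (fast_rate_um_min / selectivity)
    return minutes, undercut_um

# The paper's 400 um deep etch: roughly 6-7 hours, with only a few
# microns of lateral attack on well-aligned <111> sidewalls.
t, u = koh_etch(400.0)
assert t == 400.0 and u == 4.0
```

The same arithmetic shows why an isotropic etchant like FeCl3 is hopeless here: with selectivity 1, a 400 um deep etch would undercut 400 um sideways.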
22:41 I havent done any etching yet since i cant afford the materials until my next payday lol 22:42 Look at the date on the paper 22:42 i only got litho working reliably last week 22:42 yeah, saw it :) 22:42 this was an unsolved problem for months 22:42 man that's awesome work 22:42 cant believe how simple the solution turned out to be lol 22:42 best hack i've seen lately :-) 22:42 o/ 22:44 lekernel: http://colossus.cs.rpi.edu/~azonenberg/mirror/smoothwalls.pdf 22:45 thanks 22:45 do you think you can etch vertically like this in e.g. SiO2? 22:46 lekernel: Why not? 22:46 I can buy KOH for $4 a pound 22:46 I don't know... since you are relying on the crystal structure 22:46 what happens when you grow oxide on a wafer? do you have a neat crystal structure or a messy one? 22:47 First off, i will be buying wafers aligned to <110> 22:47 Probably these http://www.mtixtl.com/sisinglecrystalsubstrate110orn10x10x05mm1spundoped.aspx 22:47 they arent technically wafers as they arent round, but <110> is hard to find in full wafers for decent prices 22:47 And i will not be growing oxide, also 22:48 iirc they used Si3N4 deposited by LPCVD as a hardmask, but i dont have CVD capabilities 22:48 So i'll be spin coating this stuff http://emulsitone.com/taf.html 22:48 so you want to focus on MEMS? 
22:48 After heat treating it forms Ta2O5, which is pretty easy to etch with HF 22:48 growing oxide is mandatory for most transistors (afaik) 22:49 But it's resistant to alkaline etches 22:49 Dielectric is, it need not be SiO2 22:49 tantalum pentoxide was actually considered as a high-K dielectric for DRAM a while back - it would work 22:49 But emulsitone also sells a SiO2 coating solution 22:49 And, more importantly, i plan to buy a furnace i can do thermal oxidation in 22:50 I just dont have $1200 to spare yet 22:50 i can do bulk micromachining for much less ($500 or so) 22:50 Including all of the consumables 22:50 CMOS is definitely on the to-do list but its down the road 22:50 do you know about this? http://visual6502.org/ 22:51 among other things because transistors are so sensitive to trace metal contamination whereas MEMS are less so 22:51 Yep 22:51 there are also the 4004 masks published by Intel for you to chew on :-) 22:51 I do reversing too 22:51 Lol, um 22:51 less transistors than the 6502 22:51 you *do* know that one of my dreams has been to make a 1:1 scale model of the 4004? 22:51 fully functional 22:52 haha :) 22:52 But like i said mems is easier so that comes first 22:52 no need for doping or tons of masks, the process i'm looking at only needs three masks and only one even somewhat precise alignment step 22:52 the first mask is contact litho at λ = 200um lol 22:52 just thinning the wafer in the middle and leaving a thick rim around the edge for handling 22:53 then the through-wafer etch for the fingers followed by metal 1 22:53 though, as you saw in the paper, getting sub-5um alignment will be pretty easy 22:55 another thing that could potentially be interesting is MMIC's 22:55 ? 22:55 microwave ICs 22:55 those are a pain to buy 22:55 oh... Those will be trickier - tighter tolerances 22:55 do you think so? 
22:55 maybe the transistors are 22:55 Once i get the basic process working i'll see where it goes lol 22:55 but a big MMIC advantage is in the ability to print microstrip lines with more precision than on a PCB 22:56 Good point 22:56 Actually, funny thing - i was thinking of making a hybrid of PCB and IC technology at some point to do massively multilayer boards 22:56 I actually do not know how to build a good microwave transistor 22:56 Start with dual layer FR4 with copper on both sides 22:57 but it does seem to use very nasty chemicals like germane gas 22:57 Pattern your metal 1 and 2 (for power distribution) 22:57 lay down oxide on top of M2 22:57 sputter or evaporate a micron or so of Al or Cu, etch M3 22:57 rinse and repeat lol 22:57 germane is one of the few chemicals I dare not touch, close to sarin gas and the like 22:58 What about concentrated HF? 22:58 or SiH4? 22:58 I draw the line at 2% HF myself lol 22:58 HF is still a lot less dangerous than germane 22:58 even concentrated HF 22:58 Phosgene? 22:58 They use that for ion implantation 22:58 Arsine too 22:59 Neither of those are healthy to be around 22:59 My process will be diffusion based using spin on dopants though 22:59 Less precise but safer and requires less fancy equipment 22:59 just HF wet etch the doped oxide film, coat undoped oxide around it, and heat for a while 22:59 According to wiki, GeH4 is used for CVD epitaxy in a similar manner to SiH4 23:01 So that means they're using germanium based substrates 23:01 ok 23:02 so no CVD etc.? 23:02 Nope 23:02 I'm ranking processes in order of preference 23:03 what about metal layers? how can you do them without PVD? 
23:03 Spin coating is pretty much impossible to avoid and easy to do (though precise coating thickness control will be a bit tricky until i get a speed controller) 23:03 Metalization will be done by filament evaporation or DC sputtering 23:03 I'm exploring both in parallel and whichever one starts working first is the one i'll use 23:03 though eventually i want both 23:04 Thermal diffusion id going to be necessary for CMOS but not MEMS 23:04 is* 23:04 or at least, not the comb drive 23:04 heard of this? http://www.gdiy.com/projects/thin-film-sputtering-machine/index.php 23:05 No, actually, I havent 23:05 But i do have a friend doing research in sputtering 23:05 there you can get your metal layers :-) 23:05 Metalization was my second area to focus on after litho 23:05 To be done in parallel with etching 23:06 I really havent studied it in nearly as much depth lol 23:07 at electrolab (a hackspace near Paris) someone got their hands on a couple of turbopumps. we haven't used them yet, though. 23:10 I was actually thinking about doing the sputtering first 23:10 Nice 23:10 I was planning to do thermal evaporation initially, actually, since i thought it would be easier 23:11 yeah, maybe I'll start with that too :) 23:11 but if you get sputtering working I might send you guys a few dies to metalize lol 23:11 the tricky thing with sputtering is gonna be doing it *cheaply* 23:12 For $3.5K - $5K you can buy a small sputtering rig from MTI or similar 23:12 my #1 problem is time (and then money to build such expensive stuff). i'm doing too much stuff ... 
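The spin-on-dopant thermal diffusion mentioned above can be ballpark-sized with the standard x_j ≈ 2·sqrt(D·t) estimate. The Arrhenius parameters below are textbook-style values for boron in silicon and are assumptions for illustration, not process data:

```python
import math

def boron_diffusivity_cm2_s(temp_c, d0=0.76, ea_ev=3.46):
    """Arrhenius diffusivity D = D0 * exp(-Ea / kT) for boron in Si.
    d0 (cm^2/s) and ea_ev (eV) are assumed, roughly textbook, values."""
    k_ev = 8.617e-5                      # Boltzmann constant, eV/K
    return d0 * math.exp(-ea_ev / (k_ev * (temp_c + 273.15)))

def junction_depth_um(temp_c, time_s):
    """Rough junction depth x_j ~ 2*sqrt(D*t), converted from cm to um."""
    return 2.0 * math.sqrt(boron_diffusivity_cm2_s(temp_c) * time_s) * 1e4

# A 1-hour drive-in at 1100 C lands around half a micron deep.
depth = junction_depth_um(1100.0, 3600.0)
assert 0.3 < depth < 0.7
```

The strong exponential in temperature is why diffusion needs a furnace in the 1000 °C range at all: the same hour at 800 °C moves the dopant a negligible distance.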
23:12 Homebrewing cheaper is not going to be easy 23:12 But evaporation looks like it will be a lot easier to do cheaply 23:12 yeah probably 23:12 You need a high current, precisely controlled power supply (may be possible to adapt one designed for welding, i may build one for the low-power ~100W prototype) 23:13 with a little effort we can also probably get an old evaporator from the 70s too 23:13 A 2-stage rotary vane vacuum pump will get me down to ~40 mtorr, i dont know if thats deep enough 23:13 Ted Pella will sell tungsten boats, filaments, etc for a decent price 23:13 we merely need to rent a van and drive it some 600km to pick the evaporator up :) 23:13 As with wire / pellet charges for evaporation 23:13 but again there are time problems 23:13 I projected (given the pump and vacuum gauge i am thinking of borrowing from a friend) that building a working evaporator would cost ~$1.5K 23:14 maybe only $1K 23:14 http://paillard.claude.free.fr/ is very cool too 23:14 that guy built his vacuum pumps himself 23:15 including a molecular one 23:15 Nice, but i dont know french :( 23:15 unfortunately he's stopped doing this 23:15 And i dont plan to build a pump since i can get access to one 23:16 Or, at least a roughing pump 23:16 if high-vac turns out to be necessary i may try my hand at making a diffusion pump 23:16 sure. 
but vacuum pumps are otherwise expensive like hell, so it's good if there is a DIY alternative 23:16 unitednuclear sells a 2-stage rotary vane roughing pump for $295 23:17 i cant imagine DIYing one for less 23:17 in fact, vacuum anything is expensive like hell, even when it clearly need not be 23:17 Yeah 23:17 But i am not really focusing on vacuum too much yet 23:17 I'm designing processes in the order that i'd use 'em 23:17 and next after spin coating and exposure is etching 23:17 that guy http://benkrasnow.blogspot.com/2011/03/diy-scanning-electron-microscope.html uses spark plugs as voltage feedthrough 23:18 Yeah, i saw that one 23:18 those otherwise cost around 100-200€ or so at a professional vacuum equipment manufacturer 23:18 Not bad at all 23:18 rotary vane pumps aren't the worst... the main problem is turbomolecular pumps which are around $8000 23:20 Turbopumps are not cheap, that's for sure 23:21 and also seem to be easily damaged if for example your vacuum is suddenly broken with the pump running 23:21 But do you really think you can build one? 23:21 And yes, that will kill them 23:21 well, apparently Claude Paillard did something like that 23:21 Impressive 23:22 yeah :) 23:22 his work is amazing 23:22 But the question i'm asking right now is, how high vacuum is needed for basic evaporation? 23:22 unfortunately he did not publish all the details and he's no longer into that 23:22 If I purge the chamber with argon or something to remove any traces of oxygen 23:22 then pump down to 40 microns vacuum 23:22 will that be adequate? 23:22 that's what I'm thinking too. but why is it that no professional installation does that? 
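The "40 microns" (40 mtorr) question above can be sanity-checked with the kinetic-theory mean free path λ = kT / (√2·π·d²·P). The 0.37 nm molecular diameter is a typical air value and an assumption here:

```python
import math

def mean_free_path_m(pressure_torr, temp_k=293.0, diam_m=3.7e-10):
    """Kinetic-theory mean free path: lambda = kT / (sqrt(2)*pi*d^2*P)."""
    k_b = 1.380649e-23                      # Boltzmann constant, J/K
    p_pa = pressure_torr * 133.322          # torr -> pascal
    return k_b * temp_k / (math.sqrt(2) * math.pi * diam_m**2 * p_pa)

# At 40 mtorr the mean free path is only about a millimetre -- far less
# than a typical 10-20 cm source-to-substrate distance, so evaporated
# atoms would scatter many times on the way.  Line-of-sight evaporation
# usually wants ~1e-4 torr or better, where lambda reaches half a metre.
assert 1e-3 < mean_free_path_m(40e-3) < 1.5e-3
assert mean_free_path_m(1e-4) > 0.4
```

This also matches the usual division of labour: evaporation is done in high vacuum, while sputtering deliberately backfills with argon to mtorr-range pressures because it relies on gas collisions to sustain the plasma.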
23:22 I mean, i've seen DC sputtering done at ~100 mtorr 23:22 Its probably less efficient, slower deposition, etc 23:23 But for DIY the first rule is "make it work" 23:23 not "make it cost effective for mass production" 23:23 well, even in research labs when mass production isn't a priority, all sputtering i've heard of is done with first high vacuum then letting a little bit of noble gas in 23:24 Yeah 23:24 I'm not sure why 23:24 I'm asking myself the same question. 23:24 But RF sputtering is normally done at much lower (1-2 mtorr) pressures 23:24 i'll be doing DC 23:24 but no one has been able to answer it yet 23:24 Yep, one more item on the todo list 23:25 I want to set up some kind of proper website for coordinating this, now that i have people interested from all over the place 23:25 right now i'm the main guy pushing the research, i'm bouncing ideas off of two friends who live near me 23:26 and there are a bunch of folks i know online who i talk to about it here and there 23:26 But there's no central location for posting status reports etc 23:26 Any recommendations on some kind of web-based tool that will work well for it? 23:27 maybe for starters, just a mailing list with public archives? 23:27 I set up the group "homecmos" on google groups but there's been zero traffic so far lol 23:28 i havent tried using it much 23:28 personally I don't really like google groups... good old mailman is best 23:28 Want to host the list somewhere? Be my guest 23:29 I can probably create you a mailman list on lists.milkymist.org 23:29 if you want... 23:29 that might work... right now i'm still trying to figure out what kind of web presence to have 23:31 right now its just static html hosted from my office box lol 23:31 any wiki hosts to recommend? 23:31 otherwise I think sourceforge also provides mailing lists 23:32 wiki... hmm... 
actually, no 23:32 I use mediawiki and it's awful because of spam problems 23:32 it would not even let you mass delete accounts or edits and comes with no captcha by default 23:32 As a minimum I want a wiki (posting restricted to registered users probably) and a mailing list 23:32 so a default mediawiki installation is unusable because it gets daily vandalized by bots and you spend hours fixing it 23:33 Yeah, i run default mediawiki for one project but its internal and on a LAN-only server 23:33 behind a firewall 23:33 there's also github which provides a wiki 23:33 grrrr git 23:33 no 23:33 the nice thing is that the wiki is backed by a git repository 23:33 Action: azonenberg prefers svn 23:33 huh? why? 23:34 svn is slower and more unstable than git 23:34 Never liked distributed vcs in general 23:34 well you can forget about the distributed features if you don't need them 23:34 i'm a big fan of continuous integration so i want everyone committing to trunk so the code gets as many eyes on it as possible early on 23:34 git seems to encourage branching to an extent i dislike 23:34 that is possible with git as well 23:34 but i dont want to start any religious wars lol 23:34 well, personally when I switched from svn to git I don't understand how I have endured svn that long 23:35 corrupt repositories (both on client and server), slowness, bugs, segfaults, crashes, etc. 23:36 I do not use the distributed features of git a lot either (though being able to commit while offline is nice), and use it mostly for its speed and robustless 23:36 lol i've never seen any of those, but w/e 23:37 robustness 23:37 Right now i have an svn repo but its pretty empty, migrating wouldnt be hard 23:37 lekernel: never has stability issues with svn. but i agree on the slowness. once you get used to the speed of git, svn becomes quite unbearable 23:37 s/has/had/ 23:37 I want the wiki and mailing list first, vcs can be hosted wherever 23:37 thoughts on google code? 
They support VCS backed wikis 23:38 wpwrak: well you can try to grab the milkymist tree and commit it in one go to a svn repository. there's a good chance this will fail. 23:38 with git no problem 23:38 lekernel: hehe, i'll pass :) but we used svn quite extensively at openmoko for many years and i don't remember any stability issues. we actually had more trouble with git :) 23:39 So I think i'm going to go google on this 23:41 i already have the group so i'll google-code the wiki 23:41 if you have a good wiki engine to recommend (mediawiki isn't) I can also host it for you 23:42 lekernel: I dont, unfortunately 23:43 nice thing about google code is that the wiki is VCS backed 23:43 So you can even send out commit emails on wiki changes etc 23:43 but I don't want to have more mediawiki problems. one wiki is already enough to get me pissed. 23:43 Yeah lol 23:44 lekernel: to paraphrase a joke i once heard about IBM: mediawiki is not a necessary evil. mediawiki is not necessary. 23:45 thoughts about pmwiki? 23:47 lekernel: Never heard of it, i think i'll run with google for a while and see how it works 23:48 azonenberg: btw, i agree that vcs-based makes a lot of sense. particularly if you also have an offline renderer/formatter such that you can edit your pages locally and just commit 23:49 btw use of mm w/ video input and camera: http://www.vimeo.com/22966103 23:53 gn8 23:53 lekernel: (video) nice ! 23:58 --- Thu Apr 28 2011 00:00

Generated by irclog2html.py 2.9.2 by Marius Gedminas - find it at mg.pov.lt!