
One of my favorite project batteries is this 7.4v/4.4Ah Li-ion from BatterySpace. It’s basically four 3.7v batteries (approximately AA sized) packed in heavy shrink wrap.

IMG_2380

The original is the white and green on the left. I needed it to fit the form on the right so it would sit nicely in the project box for the animatronic tail. So I cut it open, and found a little more complexity than I expected.

The batteries are in two packs of 2 each, wired in parallel; the packs are then wired in series (paralleling doubles the capacity, and putting the two pairs in series doubles the voltage, which is how four 3.7v cells add up to 7.4v/4.4Ah). They connect to a little PCB that adds charging and short protection.

The trick is that the batteries have foil tape for the contacts, and I couldn’t figure out how to solder to the foil. I made 3 copper plates with solder blobs for contacts. All 4 batteries connect on one end (two negative and two positive) with a square copper plate. The solder blobs on the plate press into the battery contacts, and I taped it tightly. I soldered a wire to the plate which connects to the common pad on the PCB.

On the positive and negative ends, I did something similar with two rectangular copper plates, one taped across the two positives, the other across the two negatives. Lots of tape prevents shorts, and now I have a battery that fits nicely in the box.

My 2013 MacBook Pro has started rebooting itself at random, so it’s time to move on. I could probably get Apple to fix it (and I probably will at some point; these boxen last forever), but I need a lot more RAM anyway, I’d love a faster CPU, and it’s probably time for me to go back to gnu/linux.

Migrating to a new machine means a lot of file copying. In particular, I’ve got a Seagate 3TB, usb3 external disk I really like. I want to use the Seagate and its fancy housing with the new setup, but first I should archive it to another disk to be put away in case I want to dig it out someday (unlikely, but whatever).

This is harder than it might seem. There is about 1.8TB of data, and the drives each read/write at around 150MB/s. They’re on usb3 ports, which in theory have 5Gb/s of bandwidth, more than enough to support both drives working at capacity. I figured writing 1.8TB at 150MB/s is about 4 hours. Don’t put both drives on a shared bus, though, even a usb3 bus; everything slows to a crawl.

Alas, the filesystem is complicated. I use the classic rsync-with-hard-links system for incremental backups (Time Machine does essentially the same thing, but I don’t like it because the result is hard to read without Apple’s GUI). This makes things a little tricky because all those hardlinks (literally tens of millions in each backup iteration, multiplied by several hundred daily backups) confuse Apple’s primitive version of unix.
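For anyone who hasn’t seen the trick, a single day’s run looks something like this (a minimal sketch; the paths and date stamps are hypothetical). The --link-dest option tells rsync to hard-link any file that hasn’t changed since the previous snapshot instead of storing a second copy, so every daily directory looks like a full backup while unchanged files cost little more than a directory entry:

    $ sudo rsync -a --delete --link-dest=/Volumes/backup/daily-2016-01-25 \
          /Users/me/ /Volumes/backup/daily-2016-01-26/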

Primitive? Apple? Why yes, let me tell you more.

The naive way to copy in unix is with the cp command. Unfortunately, Apple’s BSD-derived cp doesn’t handle hard links properly. Unlike gnu cp -a, Apple’s cp copies each hard link as a separate file rather than recreating the link. This would explode the data into petabytes.
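A quick way to see the difference (a sketch; gcp here is GNU cp from coreutils, installed with a g prefix the same way gtar is):

    $ mkdir src && echo data > src/a && ln src/a src/b    # a and b are one file with two links
    $ gcp -a src dst-gnu      # GNU cp recreates the hard link inside the copy
    $ cp -R src dst-apple     # Apple's cp writes a and b as two independent files
    $ ls -li dst-gnu dst-apple    # compare the inode numbers and link counts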

The way Apple recommends copying entire filesystems is with ditto. This preserves Apple’s special metadata, though ultimately I didn’t want or need that. ditto turns out to be one of the two best ways to accomplish this task.

The old-timey unix way is to use tar (that’s gnu tar, invoked here as gtar; Apple’s tar is weaksauce), something like this:

    $ sudo gtar -cf - /Volumes/backup-buffer | lzop -1c > /Volumes/backup-buffer-frozen/mac-backup-20160126.tar.lzo 

Note the sudo; there are a lot of broken permissions in there. That will need attention at some point. Note also that I’m using lzop to do a little bit of compression. This isn’t strictly necessary, but I thought that lzop could probably keep up with tar, and in tests, I was right. The time lzop needs to compress the data is less than the time tar needs to read it off the disk, so lzop doesn’t slow the process at all. For the record, the multithreaded compression options (`pigz` and `xz -T0`) are slower than single-threaded lzop, even when lzop is compressing more aggressively (I got to about -4 before lzop slowed down the writing). Impressive stuff, lzop.
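Getting data back out is the same pipeline in reverse; a sketch (gnu tar strips the leading slash when archiving, so member names start with Volumes/backup-buffer/..., and the file name here is just an example):

    $ lzop -dc mac-backup-20160126.tar.lzo | gtar -tvf - | grep lost-file
    $ lzop -dc mac-backup-20160126.tar.lzo | gtar -xvf - Volumes/backup-buffer/path/to/lost-file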

throughput

Note that the data are being read at 219 MB/s and written at 67.3 MB/s. That’s the compression ratio working in our favor (more read, less written). Overall I found that lzop at the lightest compression setting was still shrinking the output by about 30% (it’s winning on CSVs and SQL dump files, and losing on photos, zip, gz, and bz2 files). It won’t help the job finish any faster (all the data has to be read, after all), but when the bottleneck is reading the data, we’re in good shape.

Ultimately I don’t care about how much space it takes. What I want is a disk I can drop into an external SATA-to-usb3 housing and find a file that I may have otherwise lost. An uncompressed, unarchived result is probably best.  I could have used a double-ended tar:

    $ sudo gtar -cf - . | ( cd /destination/dir ; sudo gtar -xpvf - )

I noticed that tar slowed to a crawl when it hit directories that are all hardlinks. For example, each daily backup has maildirs with thousands of files, but they’re all hardlinks. When tar hit those dirs, it slowed to about 3MB/s. Apple’s ditto had the same slowdown, but it did, at last, finish in 1247 minutes.

There are other approaches:

  • cat (or dd, which does essentially the same thing) would completely duplicate the disk, including the unused space. Since I’ve got 1.8TB on a 3TB disk, that means it will take almost twice as long as necessary. Furthermore, the resulting copy was not recognized. There are subtleties about whether one is copying the partition or the whole disk, and I couldn’t get it to work.
  • rsync, suggested by many. However, rsync crashed repeatedly with a “filename too long” error. The offending filename wasn’t very long (though the path was, but that shouldn’t be an issue), and anyway, rsync wrote the damn directories! This left me scratching my head.
  • Apple’s asr tool, made for copying disk images. Nope; it fails, complaining about problems with the directory structures. OS X’s Disk Utility can’t fix this filesystem (it just hangs on all those hard links, again), so we’re done here.
  • One I haven’t tested: first use find to make a list of the files we want, then split the list into pieces and feed the pieces to cpio in copy-pass mode, writing into the destination (see the sketch after this list). This sounds like a good idea, but would disk contention on the target slow it down too much?
  • One I haven’t tested: dar, which seems like a good candidate to produce a usable, compressed archive. Nice to be able to extract a file from the archive without opening the whole archive.
  • One I haven’t tested: fast-archiver, which works natively in parallel.
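For what it’s worth, the untested find/split/cpio idea might look roughly like this (a sketch: cpio -pdm is copy-pass mode, creating directories and preserving modification times; the destination path is reused from above just for illustration, and whether hard links survive being scattered across separate lists is exactly the kind of thing that would need testing):

    $ cd /Volumes/backup-buffer
    $ sudo find . -print > /tmp/filelist
    $ split -l 5000000 /tmp/filelist /tmp/chunk.
    $ for f in /tmp/chunk.*; do sudo cpio -pdm /Volumes/backup-buffer-frozen < "$f" & done; wait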

IMG_2083

Slobberchops (possibly aka Moop Man) and I added the final touches to the Magic Mirror literally minutes before Biggus Dickus helped us box it in ridiculously heavy plywood and pack it on the truck for shipping to the playa. We weren’t sure it would work outside the lab. Would it break in shipping? Would a show-stopping bug emerge? It was all worth it, because it totally worked.

A couple of qualifications about the video: the shadow effects (constantly shifting shadows behind the diffuser added depth to the image) are only barely visible, but they were amazing when you stood in front of it. The camera couldn’t handle the colors well (it tends to blow out when the exposure shifts quickly), but in person, the colors were very smooth & creamy, with the dithering doing its job remarkably well.

Every night many people played with it. This was my first piece of big art on the playa, and I am still pretty amazed at how happy people were and how much love they gave us for it. Yay!

Note to self: 12v/100Ah batteries are Very, VERY Heavy to schlep, but solar is a wonderful way to get DC power. It Just Worked. We got ~400W out of the panels, and the batteries were charged every day by 10 or 11am. It definitely made me think about how much more blinky goodness we could wire up.

I’ll write up the build details at some point; there are a lot of photos, and quite a few more videos with equally bad color. But meantime, let’s all dance with the lights.

The LEDs are really bright, which could be irritating. We need to see what the video feed looks like, but I’ve been thinking a bunch about how to spread the light from each pixel into a blurry blob. Or, even better, smear the light into a blob that somehow changes.

Earlier diffusers were either crappy looking or too heavy; I loved how the translucent rocks look, but they would be ridiculously heavy. Photo diffusion paper looks good, and so do sheets of HDPE cut out of shopping bags, but they’re flat and static.

What if we had HDPE bags crumpled on thin rods, like a trash skewer at a BBQ? With a diffuser on top, the bags would cast irregular, moving shadows onto it. Like this:

So my current idea is to drive, say, 10 rods, each with a stepper motor, with all the motors driven by a Pololu Maestro. We’ll probably control the Maestro with python (as in this example). An example of a full python-on-Pi-Maestro-servo setup is here.
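As an aside, the Maestro’s compact serial protocol is simple enough that you can poke at it from a shell on the Pi before any python is written. A sketch, assuming the Maestro’s command port shows up as /dev/ttyACM0: the first byte, 0x84, is Set Target, then the channel number, then the target in quarter-microseconds split into two 7-bit bytes (0x70 0x2e encodes 6000, i.e. 1500µs).

    $ stty -F /dev/ttyACM0 raw                    # raw mode so the bytes pass through untouched
    $ printf '\x84\x00\x70\x2e' > /dev/ttyACM0    # Set Target, channel 0, 1500us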

I don’t quite understand yet how to chain the motors to the rods. However, turning them slowly, and having them stop for long periods, might make the effect all the more interesting.

For the future: a python library to talk to the Pi’s GPIO pins is here.

My friend Wheat-Thin built a box for this version.

Wheat-Thin's box

I painted it white and taped six strands of 10 LEDs each to it.

Front of Mirror with LEDs

The signal and ground wires are soldered to the top of each strand, and 5v and ground to the bottom of each strand. The wiring is pretty simple.

Mirror wiring

The signal and ground wires from each strand go to the Fadecandy (FC, the board in the upper right on the white pad). I made a 6×2 wiring harness with female headers, and put an 8×2 male header on the FC for the connection. Power is supplied by the 8.4v NiCd battery on the bottom. All the grounds go to the ground bus on the black pad on the right.

Ground bus

The bus is a piece of 1/2″ copper pipe smooshed flat with wires soldered to it. The pipe is secured to the box with 10 lbs double-sided tape on top of electrical tape which is on top of duct tape (electrical tape sticks to duct tape but not to wood; the duct tape sticks to wood).

Power runs up the left to an on/off switch, then to a 7A voltage regulator (on the white pad, center left). A voltage monitor is attached to the regulator’s Vin headers to keep track of the battery’s voltage. 5v comes out of the regulator to the power bus (on the red pad). The 5v and ground wires from the LED strands are attached to the buses. It kind of works.

A couple of notes: the diffuser is just a sheet of grocery bag HDPE I’ve taped over the LEDs. I’ve set the LEDs to use a max of 50% of their power, and they’re still much too bright.

Lots remains to be done: the camera needs to be attached, configured, and understood; the Raspberry Pi needs to be mounted and powered from the battery (not simple, needs another board); the software needs to do something comprehensible (maybe I can figure out the python stuff?); and we need to come up with a better diffuser than this piece of plastic.

However, lots has been accomplished. I’ve figured out how to make copper buses that will handle the load for the production version. I’m starting to get a better sense of what the diffusion needs to do. I understand a little more about the Fadecandy server configuration (the explanation at Adafruit is the best).

This year’s theme at Burning Man is “Carnival of Mirrors.” I thought: I can build a mirror. These are my first steps.

I was inspired by the combination of two projects. The first is Adafruit’s 1500 pixel blinky curtain. That massive number of pixels is made manageable by the Fadecandy board. From the inventor’s page (linked left): “The Fadecandy Controller hardware drives up to 512 LEDs, arranged as 8 logical strips of up to 64 LEDs each. It connects to a laptop, Raspberry Pi, or other embedded computer over USB.”

Image by Micah Elizabeth Scott


The next inspiration is a piece I’ve seen online (and lost) of a camera + Raspberry Pi + LED array mirror. Apparently there’s also one at the Exploratorium but I haven’t seen it.

The idea is to feed a video camera’s image to a big LED array (perhaps 50×30 pixels) so that the participant’s image is reflected in a highly pixelated, mediated version. Lots of diffusion, no visible pixels; this is the post-pixel era. The Fadecandy drives new-style Adafruit LED strips.

First, a brief review of blinky tech evolution. Once upon a time, LED strips had two leads, one for clock and one for data. The new-style strips have only one lead, for data, and those are what the Fadecandy drives. This is better, but I had a bunch of the old-style strips around, so the first prototype is Arduino-based, not Fadecandy-based: I cut the old strips into 6-pixel lengths and made this.

Mirror-proto1-1-1

All the usual pieces are there: a voltage regulator, a power bus (on the breadboard), an Arduino Mega, and lots of poxy soldering. On top I added a diffuser made of translucent rocks.

Ok, that’s the basic idea. I have a lot more work to do on the diffuser. Diffusion paper? Rag vellum? HDPE? Not solved yet.

The next step is to test the Fadecandy. This was way, way easier than I expected:

The Adafruit tutorial on Fadecandy, the Fadecandy server, and the Processing language make this essentially trivial. Next up: a 10×6 array connected to the Fadecandy, in a proper frame, integrating the camera and Pi, and better diffusers.

I’ve been to the playa in RVs many times, and it always sucks. The RV cabinets break, the bathroom smells horrible (even if you don’t use it, which you shouldn’t), the grey water is always overflowing, it uses a zillion gallons of gas, and it always costs way more than it should.

So last year we built hexayurts, and they rocked. We started with a little model.

IMG_0981

The idea is that the six-sided building is created entirely (or nearly so) from 4’x8′ fiberglas panels. It folds flat for shipping, then it folds out on the playa. I’ll just narrate our building process briefly here because this is all very well documented all over the net. In particular, we recommend Camp Danger’s howto videos.

First, prepare yourself for a lot of taping. You spend hours, even days, cutting panels and covering the exposed fiberglas with massive sheets of giant packing tape.

hexayurt-1-7564
hexayurt-2-7567

Then the pieces get taped together. There are two kinds of taping: the permanent hinge taping, and the installation-specific taping. I’ll leave all the explanation to Camp Danger, but suffice it to say here that you have to pay close attention.

hexayurt-4-7587

We added a plywood cover to the door side, and that turned out to be a big win. The plywood attaches to the door by bolts and big washers which can be seen in the picture below.

hexayurt-5-7593

Then we cut windows into two sides. One of the many advantages of hexayurts is that they pack flat. This helps SO MUCH.

IMG_1040

We were only on the playa for 5 nights, and it wasn’t all that dusty in 2014, so we didn’t test the yurt as hard as it could be tested (I don’t have any on-playa photos). However, it was spacious, cool, and easy to set up. I can’t imagine a better way to habitate. Highly recommended!

Like many, I freaked out this week when I read that Verizon admitted to throttling connections from Netflix.  But then Verizon denied the rumor. Maybe the best answer is to keep a close eye on download speeds from different points on the net.

I’ve just switched home ISPs from Sonic.net to Astound. I loved Sonic’s politics, and their customer service was great. However, the fastest I ever got from them was 18Mb/s down, and about 2Mb/s up. For the same price, Astound gets me much better speeds, and as far as I can tell, Astound’s politics aren’t horrible. They’re not AT&T, anyway.

Server        d/l speed (Mb/s)
AWS, East     67
AWS, CA       100
AWS, OR       90
Linode, NJ    61
Linode, GA    52
Linode, TX    63
Linode, CA    80

Speed is very closely connected to geography, and that seems reasonable.

I think there’s room for a daemon app that runs a few times per week against a huge database of sites. Basically, crowdsource speeds from all over the net. The app should fire at random times, grab a site or two randomly from the database, and run a speed check. It would then report the speed, the ISP, the protocol, and the IP address of the checker, with a timestamp, to a central database. This would provide a crowdsourced monitor of what ISPs are doing with our packets.
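A client for this could be nearly trivial, since curl already measures average download speed. A sketch, with both URLs as hypothetical placeholders (one standing in for an entry from the site database, one for the central collector):

    $ url="http://speedtest.example.com/100MB.bin"                    # hypothetical test file
    $ speed=$(curl -o /dev/null -sS -w '%{speed_download}' "$url")    # average bytes/sec
    $ curl -sS -d "ts=$(date -u +%FT%TZ)&speed=$speed&url=$url" http://collector.example.com/report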

Taking clean water to the desert is necessary, but then what do you do with the gray water from showers, dishwashing, etc? You can put it in tanks and bring it home, but that’s heavy and messy. There are pump-out services, but I consider that cheating.

2013-08-31_evapotron-_5677

Evaporate! P and I designed these towers in early August, and with some help, I assembled them on the playa. These towers are made of 7′ deer fencing and rebar, guyed to the truck with climbing webbing. They sit in kiddie pools, and a 12v bilge pump in each pool drives water through a hose to the tower’s top and over a snow saucer. Power for the pumps came from 110v AC provided by Camp Soft Landing and 12v adapters I got from ebay. The water runs down the sides of the tower, soaking the burlap. Sun and wind pull the water off the burlap. Excess water runs back into the kiddie pool to make another round.

evap-2013-trucktop

Note the handsome blinkies (75 12mm WS2801’s from Adafruit) on the guy lines. I used Funkboxing’s LED effects with a bit of tweaking. They looked great!

These systems took care of gray water for the Full Circle Tea House and about 150 BM participants. The average was 20-22 gallons per day per tower, for a total of about 205 gallons. That’s over 1600 lbs.

evap-2013

This year worked a lot better than last year. One key finding is that the kiddie pools are huge; with them as the reservoirs on the bottom, it takes a lot less attention to keep them full than last year, when we used a 5gal bucket as the reservoir. The combination of full-time pumping and the taller tower led to a 30% increase in daily evap rates. Good stuff.


This video is an example of my questions on this thread at the Adafruit forum.