Saturday, June 06, 2015

CoffeeScript IIFEs

Immediately-Invoked Function Expressions are an easy way to hide your variables, preventing collisions with your code or the code of others. In JavaScript you'll see the following a lot:
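The usual JavaScript shape is something like this (a minimal sketch; the variable names are illustrative):

```javascript
// The wrapping function creates a private scope; `counter` never
// leaks into the enclosing (e.g. global) scope.
var increment = (function () {
  var counter = 0;          // hidden from the outside world
  return function () {
    counter += 1;
    return counter;
  };
})();

console.log(increment());      // 1
console.log(increment());      // 2
console.log(typeof counter);   // "undefined" outside the IIFE
```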

In CoffeeScript the syntax can be the same; enclose the function in parens, and put parens after it to call it. It's ugly, though:
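A sketch of the same pattern in CoffeeScript (`secret` is illustrative):

```coffeescript
# Wrap the function in parens, then invoke it with a trailing ().
(->
  secret = 42
  console.log secret
)()
```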

CoffeeScript has the do keyword, though, and it provides a cleaner way to do this:
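With do, the same sketch loses the wrapping parens entirely:

```coffeescript
# `do` invokes the function immediately.
do ->
  secret = 42
  console.log secret
```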

If you're passing an argument the ugly way, you'd put the closing paren at the same level as the IIFE's code block. This looks a little wonky to me, and the trailing paren looks lonely (odd for a Lisp person to say, huh?):
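A sketch of the paren style with an argument (the message is illustrative):

```coffeescript
# Note the lonely closing paren sitting below the body.
((msg) ->
  console.log msg
)("Boo!")
```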

The do version requires the argument's value to be set at the point of the do. I think I still prefer this method, but it reads a little funny to me:
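The do equivalent, as a sketch; the value rides along as a default in the parameter list:

```coffeescript
# The argument's value is supplied right at the point of the do.
do (msg = "Boo!") ->
  console.log msg
```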

Thursday, April 30, 2015

Easy Halloween Sound Hack, Part One

My aunt and uncle set up a "Haunted Forest" every year, where they take area kids through a section of their woods with groomed trails and various scary things. This year I'm helping "up the ante" with sound, lighting, and robotics (pneumatic and/or otherwise depending on time). I'll cover various means of tech-ing up the fear.

The first hack is pretty straightforward, the only possible glitch being sound file conversion to OGG (the board I'm using doesn't support MP3). WAV is supported (and desirable under some conditions), but storage space is somewhat limited, so I'm using mostly OGG except for when sounds are being looped or played sequentially: the compressed format introduces a small, but perceptible, delay under those conditions.
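Conversion is easy with ffmpeg, assuming it's installed; a sketch (the filenames are illustrative, and you should verify the board's trigger-file naming conventions against the Adafruit docs):

```shell
# Convert an MP3 scream to OGG Vorbis for the sound board.
ffmpeg -i scream.mp3 -c:a libvorbis T00.ogg
```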

My test rig was designed to be cheap and mutable, so I used a mono amplifier (Adafruit ID 2130) and the mini sound trigger board (Adafruit ID 2342, 2M flash storage), about $19 altogether. There's an integrated version (2M flash, stereo amp) that I might consider; it's about $25. I'm anticipating we'll use anywhere from 2-6 of these across the forest, with a variety of triggering mechanisms.

Breadboarded prototype; cheap version

The Adafruit docs detail how the device works. Nutshell: sounds are triggered via either the board's GPIO pins (pull to ground) or serial port. Only one of these mechanisms can be used at a time. (I have another project using serial control; this will be documented later.)

For the haunted forest (and my own display) we'll need several triggering options, including PIR, simple switches, and possibly audio. Trigger pins will be externally exposed via a simple jack, probably 1/8" audio. Speakers will be external, with an option to mount them on the project box.

One obvious usage is to hang the box from a tree with the PIR firing down, or towards the path. For extra giggles you could point it so it fires after the victims have passed, so the sound comes from behind them. With only minor effort, and separated stereo speakers, you could set up a sequential trigger so the sound comes first from behind, then in front of, the victims, etc.

For my own Halloween display I'll be using two setups: one with a PIR for people approaching the house, and another at the house for when they dip into the candy bowl. With a 4 ohm speaker the sound is more than enough to raise a hackle or two, although if you need to project over a distance you'll want something more powerful than the 2.5 W amps shown here or on the integrated device.

Upcoming posts will detail the lighting (fire and lightning effects, other simple things), any robotics, and the integration of all these systems into a unified pee-inducing system.

Sunday, April 26, 2015

Electric Imp + OLED + ... Squirrel?

The Electric Imp came out before the days of essentially-free ESP WiFi modules. It was designed to be embedded into devices, providing a WiFi interface and some basic cloud connectivity. It's a bit of an odd duck: it's initialized by blinking lights, e.g., seizure-inducing screen flashes from your phone. This is a pretty unique way to get things set up, and it works great.

I had a few of these devices sitting on a shelf, and recently ran across someone asking for help connecting one to a small color OLED with an attached SD slot for image storage. While I waited for my OLED to arrive, I decided to hook the Imp up to one I had on hand, a 128 x 64 monochrome OLED, hot on the heels of hooking up a different 128 x 64 monochrome OLED to an Arduino as part of another project (project blog(s) coming soon, pending a blogging platform change).

Not quite there yet; image buffer not written to OLED.
Both monochrome OLEDs are based on the SSD1306 display driver. I wired the display to the Imp using SPI, but as the project requirements morphed, I realized the Imp dev board the project used, the April, didn't have the IO necessary for SPI once we decided we still wanted user input under Imp control. The next iteration will hook up the display using I2C. The caveat is that the OLED that'll actually be used, while I2C-capable, will require a fair amount more effort to ensure access to both devices on the board: the OLED and the SD card.
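As a rough sketch of what the I2C side looks like in imp Squirrel (the pin pair, the address, and the helper names here are assumptions for illustration, not the project's actual code):

```squirrel
// Hypothetical sketch: SSD1306 over I2C from imp Squirrel.
i2c <- hardware.i2c89;            // April board: SCL on pin 8, SDA on pin 9
i2c.configure(CLOCK_SPEED_400_KHZ);

const OLED_ADDR = 0x78;           // 0x3C << 1, a common SSD1306 address

// Every SSD1306 I2C transfer starts with a control byte:
// 0x00 = command stream, 0x40 = display-data stream.
function cmd(b)  { i2c.write(OLED_ADDR, format("%c%c", 0x00, b)); }
function data(s) { i2c.write(OLED_ADDR, "\x40" + s); }

cmd(0xAF);                        // SSD1306 "display on"
```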

The final SSD1306 library combines two existing code bases: the first an Imp-specific, Squirrel-based class optimized for image display over I2C (i.e., no pixel-oriented line drawing); the second the Adafruit SSD1306 library, with line primitives supporting both I2C and SPI.

The end result is a new Electric Imp SSD1306 I2C library. I may continue development to support both I2C and SPI; SPI data writes are faster, but exhaust the resources of the April board (an "imp001 device"). Since this is a bespoke project I decided to keep the original library small, focused, targeted at the project's exact needs, and over-documented. I will likely robustify this effort into a general-purpose Imp SSD1306 library later.
Now we're cooking with gas: I can haz pixels!
The images above are with the original SPI wiring. The next episode will include a more in-depth writeup discussing the library itself, the I2C wiring with additional input (and output?) devices, a link to the library, and how to access the Imp from a phone and do something with the attached devices.

Friday, April 19, 2013

Remapping a Control key to Windows / Super Under Ubuntu 12.04

My old ThinkPad keyboard rocks: it has a trackpad and a TrackPoint, types nicely, has a palm rest, and is generally awesome. It does not, however, have a Windows key; this makes using it under... well, anything... difficult.

I'm currently developing with a company-bought Ubuntu laptop after having used OS X for the last three years almost exclusively. Like Windows and OS X, it pretty much demands the use of a Super key for accessing OS functionality and popping up utilities and applications.

I used two tools to do the remapping, xev and xmodmap. The lower-left control key is key code 0x25 (37). Ultimately it ended up being simple, and my .Xmodmap looks like this:

clear Control
keycode 37 = Super_L
add Control = Control_L Control_R
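For reference, the keycode itself came from xev; a filtered invocation looks something like this (requires a running X session; press the key you want to remap and read the reported keycode):

```shell
# Watch keyboard events only and show the lines containing keycodes.
xev -event keyboard | grep --line-buffered keycode
```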

I also added the following to my .xinitrc, but it may be redundant:

xmodmap ~/.Xmodmap

It's more awkward than a proper Super key, but it's workable, and I'm typing happier.

Tuesday, March 12, 2013

sftp "Received message too long" on OS X

Today I started receiving the following error when I tried to sftp to my localhost, both from the command line and from the Ruby Net::SFTP library:

$ sftp ftpuser@localhost
Received message too long 1399157792

Trivial digging revealed that ftpuser's .bashrc script was writing to stdout, which is enough to confuse sftp all 'round: the SFTP protocol reads a four-byte length prefix from the stream, so stray output gets interpreted as an absurd packet length (the bytes "Set " decode to exactly 1399157792). I modified the command it was running to redirect stdout to /dev/null, and the problem was resolved.
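A more durable fix is to guard interactive-only output in .bashrc itself, so any non-interactive session (sftp, scp, rsync) sees a clean stdout. A sketch, with an illustrative message:

```shell
# In ~/.bashrc: only produce output when the shell is interactive.
# $- contains "i" for interactive shells; sftp sessions lack it.
case $- in
  *i*) echo "Welcome back!" ;;
  *)   : ;;   # non-interactive: stay quiet
esac
```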

Tied to the Web Layer

Struts 2 claims that "actions can be POJOs". Developers find out pretty quickly that not extending ActionSupport means you lose some Struts 2 functionality (primarily I18N and validation).

One source of confusion is what "POJO" means. Being a POJO doesn't mean you can't extend a base class. POJOs are classes not directly tied to unrelated libraries, specifications, etc. For example, Struts 1 actions were directly coupled to both the Servlet specification and Struts 1 itself: S1 action methods had signatures including things like HttpServletRequest and ActionForm.
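As a sketch of the distinction (the class, property, and result names are hypothetical): an S2-style action is just a class with setters and an execute() returning a result name, with no servlet types in sight, so it can be unit-tested without a container.

```java
// Hypothetical S2-style action: no HttpServletRequest, no ActionForm.
// Struts 2 would populate `name` from the form parameter via the setter;
// in a plain unit test we can drive it directly.
class GreetAction {
    private String name;

    public void setName(String name) { this.name = name; }

    public String execute() {
        // Delegate real work to services; return a result name for S2 to map.
        return (name == null || name.trim().isEmpty()) ? "input" : "success";
    }
}
```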

I think of S2 actions as the interface between the client (browser, REST consumer, etc.) and the stuff that actually gets stuff done. S2 handles validation, type conversion, flow (or at least conversion of business-level flow into web-app flow), etc.

Heavy lifting happens outside of anything related to my web layer: persistence, logic, and calculations happen in services, utilities, models, and glue. How is it relevant that my web layer actions are tied to their web layer? What would be the cost of changing web layers?

Web layers all have their own ideas about how to interface to clients. Some use annotations. Some use XML. Some use conventions. They do validations differently. They handle flow differently. They handle form parameters differently. No matter what, the layer between the client and my business logic is going to change, radically or not, if I port to a new web framework.

That my actions extend ActionSupport isn't going to be the pain point: the request handlers are going to change no matter what. How I expose validation errors to the view will change. How I retrieve form parameters will change. How I define validation will change. How I do I18N will change. How I code the view layer itself will change.

That's not to say there aren't (or shouldn't be) unified ways to do all those things, but at the moment, there isn't a single standard approach (and maybe there shouldn't be, although a "web AST" would be cool). The trick is to minimize the coupling between the client and the application's guts.

- Work in progress -