Month: November 2015

Know when to pee with the ATtiny! (badum tss)

This project was born from a very common problem: our office’s floor has a single bathroom that can’t be seen from every desk, which leads to unnecessary back and forth when someone walks over only to find it occupied.

The solution? A simple bathroom monitoring system composed of two devices:

  • An emitter placed on the inside of the bathroom door. An infrared sensor is pointed at the lock’s knob, and a piece of black tape is applied to part of the knob. The idea is the following: when the door is unlocked, the knob’s metallic surface faces the sensor and reflects a fair amount of IR light; when the door is locked, the taped part of the knob faces the sensor instead, and since the tape is black, the amount of reflected IR light drops: we know the door is locked. A radio emitter module sends the value read from the sensor to the receiver, and a small Atmel microcontroller (the ATtiny85) acts as the brain of the system. The device runs on 4 AAA batteries and is put to sleep for 5 seconds after each reading is sent in order to save power (the emitter’s loop is sketched just after this list).
  • The receiver displays the bathroom’s occupancy status remotely. It is built around the same microcontroller as the emitter. An RF receiver picks up the readings from the emitter and, depending on the value received, an RGB LED is lit green or red. This device runs on a wall wart since the LED is constantly on.
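
Stripped of the hardware details, the emitter’s duty cycle is just read, transmit, sleep. Here is a minimal Python sketch of that logic (the actual firmware runs in C on the ATtiny85; read_ir and radio_send are hypothetical stand-ins for the ADC read and the RF module call):

```python
import random
import time

def read_ir() -> int:
    """Hypothetical stand-in for the ATtiny's ADC read of the IR sensor.
    Simulates a 10-bit reading: high off the bare knob, low over the black tape."""
    return random.randint(0, 1023)

def radio_send(value: int) -> None:
    """Hypothetical stand-in for pushing a reading out through the RF module."""
    print(f"sent reading: {value}")

# One duty cycle: read, transmit, then sleep 5 s to spare the batteries.
for _ in range(3):  # three cycles for the demo; the real device loops forever
    radio_send(read_ir())
    time.sleep(5)
```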


My first hackathon as a mentor: WHNYC 2015

Truth be told, the first hackathon I attended happened only a month ago in Montreal. It was my first contact with such an event, and a great experience overall. It’s not often that you find yourself surrounded by 400 people who share a common interest in technology and hacking, as in making things work together to bring cool ideas to life.

When WearHacks asked me if I wanted to join them for a second hackathon in New York, this time as a mentor, I didn’t have to think for long before giving my answer. After quite a long trip that led us from Montreal through Ottawa and Toronto, we finally arrived in Brooklyn and at New York University.

Volumetric display and data visualization: 4 animations for the L3DCube

The L3DCube

The cube falls into the category of volumetric displays, meaning that it can represent three-dimensional shapes. It is composed of 8×8×8 = 512 RGB LEDs, namely the very popular WS2812 that you can find in Adafruit’s NeoPixel product line.

It is sold by a company called Looking Glass Factory and is still at the Kickstarter stage. You can read about the story of its development on this Instructable.

They make an 8×8×8 and a 16×16×16 version. The small version will set you back $399, not something that I can afford; I was able to play for a while with the one from WearHacks. An even more rewarding option: build your own! You can start by having a look at these Instructables.

L3D Cube visualizations Part 1: real-time scatter plot with Thingspeak

Overview

Basically a demonstration of the plotting capabilities of the cube. We will retrieve some data from a public ThingSpeak channel (data points posted by a connected barometer installed in my living room).

The JSON returned by the ThingSpeak API is parsed in Processing and displayed on the cube. Each data series is represented by a scatter plot 2 voxels thick.
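
For reference, here is what pulling a channel’s feed looks like outside of Processing: a minimal Python sketch against the public ThingSpeak feeds endpoint (the channel ID is a placeholder, and mapping field1 to the barometer’s series is an assumption):

```python
import json
from urllib.request import urlopen

CHANNEL_ID = 123456  # placeholder: stands in for the post's public barometer channel

def fetch_feed(channel_id: int, results: int = 64) -> list[dict]:
    """Return the latest entries of a public ThingSpeak channel as dicts."""
    url = (f"https://api.thingspeak.com/channels/{channel_id}"
           f"/feeds.json?results={results}")
    with urlopen(url) as resp:
        return json.load(resp)["feeds"]  # each entry: created_at, field1, field2...

# Assumption: field1 holds the readings posted by the barometer.
values = [float(f["field1"]) for f in fetch_feed(CHANNEL_ID) if f["field1"]]
print(values[-8:])  # the cube is 8 voxels wide, so 8 points fill one frame
```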

The client code on the Photon is a variation of the main client: we use the accelerometer data to let you change the plot displayed on the front frames of the cube by tilting it one way or the other.


L3D Cube visualizations Part 2: real-time worldwide weather

Overview

We will make use of the OpenWeatherMap API to retrieve the temperature in cities around the world and display the result on the cube. The result is a “real-time” (actually, the free API key only gives access to hourly updates) visualization of the Earth’s weather.

A Python script is used to select which cities are displayed: we start with a JSON file provided by OpenWeatherMap that contains every city accessible from the API, along with its ID and coordinates. The JSON is parsed and cast as a pandas DataFrame. The latitude and longitude of each city are transformed into voxel coordinates on a sphere with a 4-voxel radius. Cities that fall on the same voxel are grouped, and a random one is picked from each group to represent that voxel.
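
The interesting step is the spherical projection. A minimal sketch of that mapping in plain Python (the actual script uses pandas; the axis convention and the rounding to the nearest voxel are assumptions, and the two sample cities are just illustrative):

```python
import math
import random
from collections import defaultdict

RADIUS = 4    # the post's 4-voxel sphere radius
CENTER = 3.5  # center of the 8x8x8 cube

def clamp(v: float) -> int:
    """Round to the nearest voxel index and keep it inside the 0..7 range."""
    return max(0, min(7, round(v)))

def to_voxel(lat_deg: float, lon_deg: float) -> tuple[int, int, int]:
    """Project a latitude/longitude onto the surface of the voxel sphere."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = RADIUS * math.cos(lat) * math.cos(lon)
    y = RADIUS * math.cos(lat) * math.sin(lon)
    z = RADIUS * math.sin(lat)
    return (clamp(CENTER + x), clamp(CENTER + y), clamp(CENTER + z))

def pick_representatives(cities):
    """Group (id, lat, lon) cities by voxel and pick one at random per voxel."""
    by_voxel = defaultdict(list)
    for city_id, lat, lon in cities:
        by_voxel[to_voxel(lat, lon)].append(city_id)
    return {voxel: random.choice(ids) for voxel, ids in by_voxel.items()}

print(pick_representatives([(2988507, 48.85, 2.35), (5128581, 40.71, -74.0)]))
```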

The result is saved in a CSV file that is loaded in Processing and used to query the API. The temperature of each city is then shown on the cube using a color gradient.
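
The post doesn’t detail the gradient itself; a common choice is a linear ramp from blue (cold) to red (hot). A minimal sketch, where the temperature bounds and the two-color ramp are assumptions:

```python
def temp_to_rgb(temp_c: float, t_min: float = -30.0, t_max: float = 45.0):
    """Map a temperature onto a blue-to-red ramp (the bounds are assumptions)."""
    t = max(0.0, min(1.0, (temp_c - t_min) / (t_max - t_min)))  # clamp to [0, 1]
    return (int(255 * t), 0, int(255 * (1 - t)))  # cold = blue, hot = red

print(temp_to_rgb(-10.0))  # mostly blue
print(temp_to_rgb(35.0))   # mostly red
```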


L3D Cube visualizations Part 3: webcam stream projection

Overview

The video stream’s frames are divided into an 8×8 grid of equal-area squares. The RGB values of the pixels in each square are averaged and used to recompose a smaller image, which is then projected onto the cube.
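
The post’s code is written in Processing; the same averaging step looks like this in Python with NumPy (a sketch, assuming the frame arrives as an (H, W, 3) RGB array):

```python
import numpy as np

def downsample(frame: np.ndarray, n: int = 8) -> np.ndarray:
    """Shrink an (H, W, 3) RGB frame to (n, n, 3) by averaging each block."""
    h, w, _ = frame.shape
    bh, bw = h // n, w // n
    frame = frame[: bh * n, : bw * n]        # trim so blocks divide evenly
    blocks = frame.reshape(n, bh, n, bw, 3)  # split into an n x n grid of blocks
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # fake webcam frame
print(downsample(frame).shape)  # (8, 8, 3): one averaged color per cube column
```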

With the enable3d option set to true, the past 7 frames are stored and displayed on the back frames of the cube with a delay set by the variable updateFrameRate.
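
Under the hood, this is a small ring buffer of past frames: the newest one sits at the front of the cube and older ones slide back. A minimal sketch of that bookkeeping in Python (strings stand in for frames; the drawing calls are omitted):

```python
from collections import deque

history = deque(maxlen=8)  # the front frame plus up to 7 delayed back frames

def push_frame(frame):
    """Store the newest frame and pair every stored frame with a z layer."""
    history.appendleft(frame)
    return list(enumerate(history))  # [(0, newest), (1, one frame ago), ...]

for i in range(3):
    layers = push_frame(f"frame-{i}")
print(layers)  # frame-2 at z=0, frame-1 at z=1, frame-0 at z=2
```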

The webcam stream could easily be swapped for any other video stream if need be.


L3D Cube visualizations Part 4: depth and color projection with the Kinect

Overview

The Kinect is a traditional camera paired with an infrared camera, enabling it to perceive depth in addition to color information.

Originally sold for the Xbox, it is now available for PCs under the brand Kinect for Windows. Don’t let the name fool you: it will work just as well on OS X or Linux.

Similarly to what we did with the webcam, we connect to the Kinect’s video stream, analyze each frame, and downsize it so that it can be displayed at the cube’s 8×8 resolution.

But this time, we will also extract the depth information that the Kinect returns along with the color information of every pixel. In the same way that we averaged the RGB values to recompose a smaller output image, we will compute the average depth of each new pixel. This depth will be used to position the voxels on the z axis of the cube.
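
As a rough illustration of that step in Python with NumPy (the post’s code is Processing with a Kinect library; the block averaging mirrors the webcam example, and normalizing the depth over the frame’s own min-max range is an assumption):

```python
import numpy as np

def frame_to_voxels(color: np.ndarray, depth: np.ndarray, n: int = 8):
    """Average color and depth over an n x n grid, then derive a z layer
    for each cell from its normalized average depth."""
    h, w = depth.shape
    bh, bw = h // n, w // n
    color = color[: bh * n, : bw * n].reshape(n, bh, n, bw, 3).mean(axis=(1, 3))
    depth = depth[: bh * n, : bw * n].reshape(n, bh, n, bw).mean(axis=(1, 3))
    span = depth.max() - depth.min() + 1e-9  # assumed normalization range
    z = (depth - depth.min()) / span * (n - 1)
    return [(x, y, int(round(z[y, x])), tuple(color[y, x].astype(int)))
            for y in range(n) for x in range(n)]

color = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # fake color frame
depth = np.random.uniform(500, 4000, (480, 640))                  # fake depth map
print(frame_to_voxels(color, depth)[0])  # one (x, y, z, (r, g, b)) voxel
```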

