IaT // Interactive Thing

Interactive Thing (IaT) is an ongoing exploration of trainable interfaces for the future smart object: a switch that reacts to your personal sonic gestures, short sounds such as a snap or a whistle.

With an increasingly opaque network of "intelligent" things coexisting among us, even the most mundane objects can now start to observe, analyse and interpret our everyday behaviour. As a first step, everything becomes a speaker with eyes and ears, but what will be the next step?

Starting with this question, IaT investigates how the same technologies that pave the way for the invasion of our private sphere can be used to propose non-intrusive, non-connected modes of interaction with the new electrical inhabitants. Machine learning enables the user to train and adapt an object's behaviour, bridging the emerging complexities by affording active engagement in the object's learning trajectory, much like teaching a dog.
// 2019
// individual
machine learning
research
exploration
storytelling

Age of computing

While our experience as human beings hardwires us to expect linear progression, technological advancement has proven to be exponential. This is epitomised by Gordon Moore’s observation that the number of transistors in an integrated circuit doubles approximately every two years, a finding that became known as Moore’s Law.

For a more tangible example, we could look at the progression rate of Google’s board-game-playing artificial intelligence AlphaZero. For a dedicated person putting in a serious effort towards becoming a chess grandmaster, it takes 8-10 years of learning to reach this status. The chess algorithm, on the other hand, took just 9 hours of training to reach a similar level. For the more complex and intuitive games of Shogi and Go, AlphaZero took 12 hours and 13 days respectively. [*]
In our approach of humanising computers, we could look at these new artificial instances as still being in their infancy. IBM’s artificial intelligence Watson, for instance, is in its 8th year of existence since reaching its first acclaimed goal, winning the quiz show Jeopardy! in 2011. Maybe, in the same way as with dog or cat years, we should propose computer years; on an exponential scale this would then mean an age of 64 years?!

* https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go

Future, present, past

“Homes have always been “connected” to the external world by means of infrastructure and a flow of inputs and outputs. Be it originally in 1700 BC, in the form of water; in the 1700s with gas for light and cooking; in the 1800s, in the form of electricity and telephonic communications; to radio waves and video in the 1920s and 30s; and ultimately data and information in the 1990s. Pipes, cables, lines, satellite dishes, and other infrastructure elements became part of the home to feed faucets, kitchens, washing machines, radios, phones, televisions, and to make the home electrified, automated, or smart.” [*]

“If data is the new oil, the home is the new Texas” [**]. This statement by Joseph Grima exemplifies the implications that technological advancements inevitably bring with them. In a literal sense, the new oil rigs are sitting right in front of us in the form of Alexas, Siris, Cortanas, Google Homes and any other ‘smart’ device, thereby reversing the direction in which the resources used to flow: from consumer to producer. The question is: are we prepared to willingly hand over agency over these collected resources?

* Rebaudengo, Simone. “Design for Living with Smart Products: The Intelligent Home.” O’Reilly Media, Inc. (2017)
** Space Caviar. “SQM: The Quantified Home.” Lars Müller Publishers (2014)

Unsettling

Human communication relies on contextual information and the ability to change or adjust a response accordingly. We relate to each other through situational cues and oftentimes temporary states. We reveal our intentions in a repeated game of question and answer, creating dependencies on one another.
In our aspiration to establish technology in our homes, however, we have grown accustomed to a unidirectional mode of communication with objects that, given their names, consider themselves to be very human. In an ever-growing quest for convenience, these smart artifacts produce responses while neglecting the chance that they might be wrong or contextually inappropriate. Rather than communicating the probabilistic nature of their responses, they live in a world of absolutes. At no point does the device’s owner have a chance to actively intervene and gain insight into the machine’s intentions. Situationally placed information gets disguised as contextual awareness, backed by streams of data from our quantified selves.
Future visions of the smart home intend to make this computational smartness completely invisible. The mediation and monitoring essentially happen in the background, the ambient. This dynamic raises further concerns about the agency over our privacy and the control of the interaction with our new digital inhabitants.

The graphic shows existing projects, interventions and relational objects that highlight alternative models of applying "smartness" to objects. They are clustered by their expected output (Information, Interpretation, Expression) and positioned by the language they speak (Computer: code, QR code, ...; Human: voice, text, gesture).

Explorations

Over the course of the prototyping phase, the intention was to replicate, modify and interpret existing projects to enable a hands-on learning approach. This process allowed for quick insights into the choice of medium as well as the hardware limitations of the later project.

IaT

Interactive Thing utilises machine learning, a “field of study that gives computers the ability to learn without being explicitly programmed”*. More precisely, the process is called supervised learning. Similar to training a dog, this requires the user to first teach the object the intended outcome. For a dog this might be a command like "sit"; for a computer it could be a certain type of image or another comparable form of data. This step, commonly referred to as training the machine learning model, is, simply put, a way of translating your input from human into computer language. For IaT this means recording audio and translating it into images, as sketched below.

* Samuel, Arthur L. “Some Studies in Machine Learning Using the Game of Checkers.” IBM Journal of Research and Development 3 (1959): 210–229
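
To make this translation step tangible, a minimal Processing sketch using the Minim library can render live microphone input as a scrolling spectrogram image, with frequency on the vertical axis and loudness as brightness. This is an illustrative sketch, not the project's actual code:

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;
int col = 0; // current spectrogram column

void setup() {
  size(512, 256);
  background(0);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 1024); // live microphone input
  fft = new FFT(in.bufferSize(), in.sampleRate());
}

void draw() {
  fft.forward(in.mix); // analyse the current audio buffer
  // paint one column: each pixel is one frequency band, brighter = louder
  for (int y = 0; y < height; y++) {
    int band = int(map(y, 0, height, 0, fft.specSize() - 1));
    stroke(min(255, fft.getBand(band) * 10));
    point(col, height - 1 - y);
  }
  col = (col + 1) % width; // scroll: wrap to the left edge when the image is full
}

Each screen column is one FFT frame, so a short sound like a snap leaves a characteristic vertical pattern that a model can later be trained to recognise.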

How it works

The prototype consists of three parts: three toggle switches connected to an Arduino, a Processing sketch, and Wekinator, an interactive machine learning platform. The toggle switches act as the user inputs: one for recording background noise, another for collecting trigger sounds, and the last for running the machine learning model. The switch states get transmitted to the Processing sketch, which both transforms the audio input into machine-readable images and communicates with Wekinator. Once the user flips one of the two recording switches, the raw audio gets translated into a spectrogram, and a continuous stream of spectrogram images is sent to Wekinator and assigned a label. Fed with data about the desired sound gesture and the environmental noise, the model can then be run. While running, Wekinator compares the incoming audio stream to the previously labelled images and determines whether they match; if they do, a signal is sent to turn the lamp on or off.
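
One plausible way to wire up the Processing–Wekinator communication is via OSC, using the oscP5 library and Wekinator's default convention: feature vectors go to /wek/inputs on port 6448, and classification results come back on /wek/outputs on port 12000. The sketch below is a simplified illustration, not the project's actual code; the helper sendFeatures and the lamp logic are assumptions, and the spectrogram is reduced to a plain vector of FFT magnitudes:

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress wekinator;

void setup() {
  osc = new OscP5(this, 12000);                  // listen for Wekinator's output messages
  wekinator = new NetAddress("127.0.0.1", 6448); // Wekinator's default input port
}

void draw() { } // keep the sketch running so OSC events get processed

// send one frame of audio features (e.g. FFT band magnitudes) to Wekinator
void sendFeatures(float[] features) {
  OscMessage msg = new OscMessage("/wek/inputs");
  for (float f : features) msg.add(f);
  osc.send(msg, wekinator);
}

// Wekinator replies with the class it recognised in the incoming stream
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/wek/outputs")) {
    int label = int(msg.get(0).floatValue());
    if (label == 1) {
      // trigger sound matched: here the sketch would tell the Arduino
      // over Serial to toggle the lamp
    }
  }
}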

Work in progress

The interactive prototype of IaT was exhibited at the Work in Progress Show at the RCA. In order to engage visitors in a more reflective conversation about their personal use of and opinions on smart products, I created a flowchart for them to follow. Three strands of questions challenge their expectations of a smart object and position them within a framework of six underlying personas, from geek to privacy advocate.

What's next?!

... the end?
Not really. This project marks a starting point for me to further explore the possibilities and quirky applications of machine learning. This is where the fun begins ...