Synchronicity of a Machine

Baths of Caracalla [3]

The bathhouse has a 150-foot ceiling. Kahn noted that we are perfectly capable of taking a bath under an eight-foot ceiling, yet “there is something about a 150-foot ceiling that makes a man a different kind of man”. This idea is not only about the scale of space; it points out that we are capable of living in these metaphorical eight-foot spaces, but asks why we should not push the limits and live in a 150-foot space, and whether we could design for that condition. Architectural design can be more than an eight-foot ceiling. Design can be the magnificent Baths of Caracalla or the surreal feeling of grandeur in the pit of your heart upon entering St. Peter’s Basilica. The feeling Louis Kahn is pointing at is not the religious subtext but the sheer physical difference between these spaces and ordinary ones, and how that difference can mentally shape the way we feel. In effect, he challenges architecture to take design a step further, toward spaces that foster imagination and wonder.

Studies have been conducted on classrooms, specifically examining test scores relative to a classroom’s physical environment. The article “A Holistic, Multi-Level Analysis Identifying the Impact of Classroom Design on Pupils’ Learning” collected data on 751 students from 34 classrooms across seven schools in the United Kingdom, examining whether components of the physical environment affected students’ learning outcomes. The researchers took samples and measurements of air quality, recorded day and afternoon thermal readings, and noted each classroom’s physical and acoustic structure, alongside subjective survey data from students and teachers. [5]

The study concluded that six factors could significantly affect students: color, choice, connection, complexity, flexibility, and light. Together, these factors showed a 73% linkage to student performance and learning outcomes. The physical environment does not only limit where we can physically move, but possibly where we can cognitively move as well. Another study took place in central New York, in a multi-story building adjacent to a train track. A second-story classroom facing the tracks endured a significant rattle and the loud grind of metal ringing through the room multiple times a day. Students in this classroom had lower test scores than students whose rooms did not directly face the tracks and had less ambient noise pollution. This study is by no means conclusive; it merely corroborates strong existing evidence that our built environment directly affects us. Sarah Goldhagen, in her work Welcome to Your World: How the Built Environment Shapes Our Lives, agreed with Kahn. [4][5]

We could have classrooms facing train tracks, and students could overcome the almost visceral noise, but should they have to? If we believe that the built environment can shape the way we study, then it can also be suggested that it shapes the way we think. Goldhagen argues from these studies that we can be inspired by the spaces we study within, but we can also be limited by them. She cites findings that classrooms with higher ceilings and more natural daylight produced higher test scores than classrooms with lower ceilings and less natural light penetration. [4][5]

We create internal maps of the space around us, beginning the moment we first walk into a new space. Just as our bodies keep an internal clock and regulate breathing without conscious thought, our minds maintain an internal spatial map of our surroundings. This idea of an internal GPS was researched by Kevin Lynch, who concluded that we define spaces by landmarks, edges, paths, and nodes. These spatial points allow us to navigate spaces and cognitively understand where we are in relation to the environment around us. A landmark could be the refrigerator in your house, signifying that you are in the kitchen as opposed to the bathroom, which is marked by a shower. These conditions that inform our geospatial mapping are not merely physical, but also cognitive. Lynch suggested that we create formal links to these physical components of our spaces, which allows us to feel comfort and warmth within a space. This points toward a shared intimacy with our spaces through subconscious links that inform the way we mentally traverse spatial voids. [4][5]

Nest: Image by Evelyn Chong from Pexels [7]

Gaston Bachelard, in his work The Poetics of Space, described how we navigate spaces by these real points, creating what he called nests. Imagine yourself walking through an airport terminal, looking for an open spot to sit down before your flight. When you find your ideal spot, you set your bag down on the floor, or perhaps on the open chair adjacent to you, then your phone to the right and possibly your current novel to the left. You have created a comfortable space for yourself, positioning your belongings in a spatial realm relative to your own body. This behavior is what Bachelard calls nesting. The individual has created their very own personal nest in a temporary space, which allows them to feel some sense of spatial comfort. This internal mapping of our space gives us mental comfort, allowing us to discern where we are relative to the rest of the world. Bachelard pointed out that these nests are held firmly in our sense of comfort and inform our spatial decisions. If we form strong connections to the spaces we live in, and these spaces have a direct impact on the way we think and learn, then could an architectural design team be directly influenced by the physical space they inhabit when they make spatial decisions? [1]

In Rsbakker’s essay Artificial Intelligence as Socio-Cognitive Pollution, one question revolves around how prone we are to repeat behavior given similar input. Rsbakker puts forth that we instinctively map specific actions to a given set of inputs, and these actions turn into patterns of behavior in a myriad of situations: the gait of our stride as we walk down the street, the tools we use when building a table, the verbal response when ordering a cup of coffee. These actions are repeated patterns of behavior influenced by past experiences. [8]

“Screw problems require screwdriver solutions, so perhaps screw-like problems require screwdriver-like solutions. This reliance on analogy provides us a different, and as I hope to show, a more nuanced way to pose the potential problems of AI. We can even map several different possibilities in the crude terms of our tool metaphor. It could be, for instance, we do not possess the tools we need, that the problem resembles nothing our species has encountered before.” [8]

Here Rsbakker discusses our habit of using a screwdriver to tackle anything that resembles a screw-like problem. This is not to say the behavior is inappropriate, but to point to the feedback loop it creates. When we make a morning cup of coffee, we pour it into an insulated ceramic mug rather than a glass cup. The pattern follows from logic: pouring hot coffee into the glass and picking it up would feel incredibly hot to the touch. We have past evidence that the hot liquid is better suited to the ceramic mug, so we pour the coffee and move on with our day without giving the matter further thought. What if the same amount of thought we give to reaching for the screwdriver when confronted with a screw-like problem is all we give to our architectural designs? We are confronted with a specific type of design problem, and we reach for the screwdriver to solve it. This feedback loop could inadvertently perpetuate design that feeds upon itself, with no new degrees of freedom introduced into the design process. [8]

The average person in an urban environment spends upwards of 90% of their day inside a building. If these buildings can impact the way we learn, and we are a species caught in a constant feedback loop, then how do we introduce new degrees of freedom and break the cycle? Rsbakker pointed to this dilemma concerning the prospect of humanity creating sentient artificial life, positing that we as a species are in a constant feedback loop, taking information from the past and applying it to current problems. This paradigm does not mean that we are incapable of certain degrees of freedom; it could mean, however, that the magnitude of those degrees of freedom is limited. If architects lived in perpendicular square spaces their whole lives, could they be predestined to create more square spaces? What if one method of introducing new degrees of freedom were to allow a mechanic mind to propose new notions of design into the process? A mechanical mind would, by its very nature, think differently than us. The mechanic mind would not necessarily think of using a ceramic mug for the hot coffee unless programmed to do so, which would mean the person programming it gave the mind their direct experience to inform its actions. [5][8]

The mechanic mind in this metaphor is a machine programmed with a given set of parameters. However, the possible outputs generated by the mechanic mind’s experiences would not be determined by human touch. A traditional calculator would not be within this domain, as every input is mapped one-to-one to a given output. One plus one will always output two, but this proposed mechanic mind would output various values and await input from a myriad of sources to determine an answer. This hive-mind approach resembles the method a queen bee uses when selecting mates to populate the hive: she mates with a multitude of suitors, never initially knowing which would be the ideal candidate. This framework of not predetermining an accepted answer is what a mechanic mind can provide to the human decision-making process. What if a mechanic mind could augment the way we design, effectively changing our internal mappings and allowing new relationships to be created in our neatly nested built environments? These new relationships might initially be foreign to our mental and cognitive touch. However, they could open new possibilities for developing advanced approaches to changing an eight-foot space into a 150-foot one.

How Does a Machine Think?

A machine solves problems using a single-threaded methodology based on its programming. Machines are rigorous and consistent by nature: when we type one plus one into a calculator, the return has historically always been the same. Yet machines are capable of seeing the world differently than us, crafting solutions that may even surprise. The concept of a machine seeing the world differently is what drives this investigation. Can a machine be programmed to have a mechanic mind, one that is programmed by our thought process but capable of augmenting the way we think about problems? Machines can be programmed to take a discrete number of variables as input, metaphorical levers that we can pull while watching how the output changes.

This manipulation could allow for a systematic approach to architectural design. In Animate Form, Greg Lynn discusses how, in a scientific experiment, we wish to control every single variable, effectively creating what Lynn describes as an instance. Imagine this instance as a ball rolling down a hill, and now imagine a computer program simulating a ball rolling down the same, yet simulated, hill. These two events could theoretically share the same input variables, but they are distinctly different. In the simulation, a programmer would need to define values for the physics engine that propels the ball down the hill, then define how the ball reacts to this artificial gravity, controlling every single aspect of how the ball rolls. The outcome would be predefined by the programming, with no degrees of freedom. If a person rolls a ball down a real hill, by contrast, many unforeseen variables could affect the outcome: a dog running into the ball, or unexpected rocks along the path changing the ball’s intended course. The question then becomes: how do we introduce freedom into a machine? How could we get the ball to change course while rolling down the hill without programming it to do so? The marriage between user interaction and a mechanic mind might introduce new degrees of freedom and add surprise to our design process. [6]
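To make Lynn’s notion of an instance concrete, here is a minimal, hypothetical sketch (not from Animate Form itself) of the simulated ball: every variable is programmed, so running it twice with the same inputs always yields the same outcome.

```java
// A minimal sketch of Lynn's "instance": a simulated ball rolling down a
// hill where every variable (gravity, slope, timestep) is programmed, so
// the outcome is identical on every run. All names here are illustrative.
public class BallInstance {

    // Position of the ball along the slope after rolling for `seconds`,
    // using simple Euler integration. Friction is deliberately ignored.
    public static double positionAfter(double slopeAngleRad, double seconds, double dt) {
        double g = 9.81;                             // fixed, programmed gravity
        double accel = g * Math.sin(slopeAngleRad);  // acceleration along the slope
        double velocity = 0.0;
        double position = 0.0;
        for (double t = 0.0; t < seconds; t += dt) {
            velocity += accel * dt;
            position += velocity * dt;
        }
        return position;
    }

    public static void main(String[] args) {
        // Two runs with identical inputs always agree: no degrees of freedom.
        double a = positionAfter(Math.toRadians(30), 2.0, 0.001);
        double b = positionAfter(Math.toRadians(30), 2.0, 0.001);
        System.out.println("run 1: " + a + "  run 2: " + b);
    }
}
```

The dog and the rocks of the real hill have no equivalent here; introducing them would mean feeding the simulation inputs its programmer never anticipated, which is precisely the role user interaction plays later in this paper.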

An Experiment

If we define sound as the compression of air molecules neatly ordered into harmonies and melodies, then it follows that we are capable of categorizing sound into genres of music. This categorization occurs within our minds: we hear a particular melody, and our cognitive mind begins flipping the pages of our memories to find an identical match. We do this based on similarities we remember between songs in a given genre; the drumbeat or a familiar guitar rhythm gives us clues on how to classify the new sound. Our mental feedback loop acts as our primary catalyst for decision making. We are capable of hearing and understanding the notes of a given sound. However, we are unable to see the physical properties of a sound wave, but a machine can be programmed to see this invisible wavelength. A machine programmed to take a sound as input and break down its physical characteristics could match sounds based on a different kind of mapping, one built from the vibrations of a sound rippling through the air rather than a human ear recognizing a melody or harmony. This extends our human capability into a new realm of possibilities and questions.

The example below is the Java application reading an input song and recording the peaks and troughs of the given wavelength. The amplitude values are assigned an index based on the order in which they are processed by the runtime script. The visualization maps the indices (i) to the x-axis and the amplitude heights to the y-axis.

This machine, capable of analyzing input songs and breaking their wavelengths down into components and amplitudes, will be the catalyst for the experiment.

Wavelength Breakdown

How amplitude is calculated
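The original application is not reproduced here, but a minimal sketch of the amplitude step it describes might look like the following: samples are cut into fixed windows, and the peak (largest absolute sample) of each window is recorded as that window’s amplitude, indexed in input order. The window size and the synthetic sine-wave input are assumptions for illustration.

```java
// Hypothetical sketch of windowed amplitude extraction. Each window of
// samples is reduced to its peak absolute value; the window index i maps
// to the x-axis of the visualization, the peak to the y-axis.
public class AmplitudeSketch {

    // Peak amplitude of each consecutive window of `windowSize` samples.
    public static double[] windowPeaks(double[] samples, int windowSize) {
        int windows = samples.length / windowSize;
        double[] peaks = new double[windows];
        for (int i = 0; i < windows; i++) {
            double peak = 0.0;
            for (int j = i * windowSize; j < (i + 1) * windowSize; j++) {
                peak = Math.max(peak, Math.abs(samples[j]));
            }
            peaks[i] = peak;
        }
        return peaks;
    }

    public static void main(String[] args) {
        // A 440 Hz sine wave stands in for a song's decoded samples.
        int sampleRate = 44100;
        double[] samples = new double[sampleRate]; // one second of audio
        for (int n = 0; n < samples.length; n++) {
            samples[n] = Math.sin(2 * Math.PI * 440 * n / sampleRate);
        }
        double[] peaks = windowPeaks(samples, 1024);
        System.out.println("windows: " + peaks.length + ", first peak: " + peaks[0]);
    }
}
```

In a real pipeline the sample array would come from decoding the song file (for example via javax.sound.sampled) rather than from a generated sine wave.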


Analyzing Jar of Hearts by Twenty One Pilots

The Java application recorded each wavelength of the input song and analyzed each amplitude, coloring it by frequency.

Example Values from Song Analysis


The machine, placed in an art museum at the University of New Mexico, will listen to the sounds of the museum. Upon user interaction with the installation, the machine will take the recorded sounds of the museum and break down their physical properties, specifically the sound waves’ amplitudes. It will then cross-reference the input sound against a song library, return the best match, and play the given song for the user. This machine falls under Lynn’s definition of an instance, as every parameter of the calculations has been programmed; the user’s response, however, is the independent variable. The installation first connects users to their built environment in a new way, possibly changing their acoustic mappings of the museum space. What song would the machine return when no people are talking in the museum and only the HVAC saturates the ambient noise? We often think of this as static white noise, but connecting this HVAC background noise to a recognizable song could create a new mapped connection for the user in the space. The machine may map HVAC static to the song September by Earth, Wind & Fire, forming new connections to the static noise of the art museum.
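The cross-referencing step can be sketched as a nearest-neighbor search over amplitude signatures. The library entries and the distance metric (plain Euclidean distance) below are illustrative assumptions, not the installation’s actual data or matching algorithm.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the matching step: the museum recording's
// amplitude signature is compared against a small song library, and the
// song with the nearest signature wins.
public class SongMatcher {

    // Euclidean distance between two equal-length amplitude signatures.
    public static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Title of the library song whose signature is nearest to the input.
    public static String bestMatch(double[] input, Map<String, double[]> library) {
        String best = null;
        double bestDist = Double.POSITIVE_INFINITY;
        for (Map.Entry<String, double[]> entry : library.entrySet()) {
            double d = distance(input, entry.getValue());
            if (d < bestDist) {
                bestDist = d;
                best = entry.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, double[]> library = new LinkedHashMap<>();
        library.put("quiet ambient track", new double[]{0.1, 0.1, 0.2});
        library.put("loud dance track", new double[]{0.9, 0.8, 0.9});
        // A low, steady HVAC-like hum resembles the quiet signature.
        System.out.println(bestMatch(new double[]{0.12, 0.11, 0.15}, library));
    }
}
```

Because nothing here ranks songs by melody or genre, the match the machine returns can differ from what a human ear would pick, which is exactly the surprise the installation is after.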

This is a thought experiment intended to surprise the users of the museum and to create new questions by enabling a machine to change the acoustic construct of the space. The installation expands the user’s engagement with the museum space while modifying the user’s realm of understanding. Users will glimpse how a machine is capable of seeing a sound wave, and being allowed to see it too augments the way we understand the specific sounds of the art museum.

What Does This Mean?

This machine placed within the UNM art museum is not yet a fully realized mechanic mind as defined in this paper. The art installation can create new acoustic connections for users within the museum space, revealing a new realm of possibilities. The next step in this conceptual framework could be to test whether designs created by users in the same built environment would change based on a mechanic mind’s input. The mechanic mind can take inputs and show users a new realm of outputs. We can leverage machines to extend our knowledge and augment what we see as the realm of possibilities. If we find ourselves stuck in a feedback loop of behavior, repeating the same architectural patterns over and over again, could we use machines to grow our realm of possibilities and break into a whole new dimension of design? Can a machine surprise us?

I am pursuing my master’s degree in Urban Data Science and Informatics at New York University and working as an urban designer + web developer at KPF Architects
