
Depending on whom you talk to, architects approach artificial intelligence (AI) with anticipation, skepticism, or dread. Some say algorithms will handle drudge work and free designers to focus on the more creative aspects of their jobs. Others assert that AI won’t live up to its hype—at least not in the near future—and will make only marginal improvements in the profession. And a third group worries that software that learns on its own will put a lot of architects out of work.

Science fiction writers have been imagining robots that think like human beings for more than 100 years. But the field of artificial intelligence really began in the middle of the last century with British mathematician Alan Turing’s 1950 paper “Computing Machinery and Intelligence.” In 1956, at a conference hosted by Dartmouth College in New Hampshire, mathematician John McCarthy coined the term “artificial intelligence” and, with a group of participants, explored how to “make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves,” according to the event’s proposal.

Imbuing computers with true intelligence, though, has proved to be more difficult than originally imagined. Sixty-five years after the Dartmouth conference, computers can process huge amounts of information, analyze it to find correlations and patterns, and then make predictions based on those patterns. What makes AI different from previous forms of computation is machine learning (ML), which employs algorithms that get better at performing certain tasks the more they do them; they learn without having to be programmed for each step. The bigger the data set used to “train” an algorithm, the better it will perform. By 1997, IBM had developed a chess-playing program called Deep Blue that was able to beat Garry Kasparov, the world chess champion at the time. Today, Google Translate does a pretty good job of recognizing text in one language and rendering it in another. A program known as GPT-3 can take a prompt of a few words and write a paragraph of text that seems, at first glance, to have been written by a person. Algorithms allow autonomous vehicles to navigate city streets, radiologists to identify cancerous tumors, and online shopping services to recommend products to their customers.
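A few lines of code can make that idea concrete. The sketch below is a minimal illustration, not any production system: using synthetic data and the open-source scikit-learn library, it shows the same learning algorithm making better predictions on unseen cases as its training set grows.

```python
# A minimal sketch of the core idea: the model is fit to examples rather
# than programmed step by step, and more training data generally yields
# better predictions. Synthetic data; scikit-learn's LogisticRegression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                 # 5,000 examples, 10 features
y = (X @ rng.normal(size=10) > 0).astype(int)   # a hidden rule to be learned

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, 3000):                       # growing training-set sizes
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>4} examples -> "
          f"accuracy {model.score(X_test, y_test):.2f}")
```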

But computers still don’t think like people. They have no awareness of anything beyond their own predetermined capabilities and don’t have anything close to common sense. They know only what they have been shown and lack the ability to generalize from one task to another. A 2016 Obama Administration report on the future of AI identified the technology’s potential to “open up new markets and new opportunities for progress in critical areas such as health, education, and the environment,” but admitted that “it is very unlikely that machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years.”

In the past decade or so, software for architects has evolved from CAD to scripted geometry engines like Rhino and parametric BIM platforms like Revit—moving from the representation of buildings (in plan, section, elevation) to more responsive systems that show the impact of one change on the rest of the project. Thanks to faster and cheaper computers and the enormous computing power and storage capacity of the cloud, AI systems are now able to encode information and relationships in increasingly complex layers. Because they’re able to process vast amounts of data accessible from internet-based sources, they can create statistical correlations that approximate learning, says Phillip Bernstein, author of a forthcoming book on AI and an associate dean at the Yale School of Architecture.

Of course, most architects are nowhere close to doing any of that in their practices; they use BIM platforms to create drawings rather than connecting their models to data to generate insight, notes Bernstein. Architects will be pushed to adopt AI tools, he says, mostly by clients who are already using large data sets, machine learning, and predictive simulation to manage their operations and facilities. Steve McConnell, managing partner at NBBJ, which has designed buildings for major tech companies like Amazon and Tencent, echoes this sentiment. By using data to drive their designs, McConnell points out, architects can demonstrate the value of what they do to clients who run their own businesses with data-driven processes.

McConnell says AI will also enable architects to “move upstream” in the building-development process by giving them the analytical tools to serve as “strategic partners” to clients—helping them identify business opportunities, for example, even before project planning has begun. Architects, though, “need to reimagine their skills and what they bring to the table.”

The promise of AI is not only that it will provide a business advantage, points out Bernstein. The technology has the potential to help architects address “profound problems,” such as reducing waste and embodied carbon in projects, making public spaces more equitable, and enhancing building performance, he says.

A few design firms are developing their own proprietary AI tools to help them work on big projects. Gensler, for example, rolled out its Nform “ecosystem” of data-driven software—much of which employs algorithms and AI—in the summer of 2020. Developed by the firm’s design technology studio, the in-house package of software addresses design issues at different scales—from floor plan to master plan to sustainability strategy. By applying algorithms to large data sets, the new software gives designers rapid feedback on the impact of changes to interior-space plans or building configurations, so design decisions can be made much faster and earlier in the process. “We want to augment the power of our designers and make them more agile,” says Marc Syp, Gensler’s director of computation.

Some firms have introduced their AI tools commercially. One such software application is cove.tool, which optimizes designs based on multiple parameters, including daylighting, energy use, code compliance, and cost. Launched in 2017 by Atlanta-based architects Patrick Chopson and Sandeep Ahuja as an outgrowth of their sustainable-design consulting firm, Pattern R+D, cove.tool is now their primary focus. The cloud-based app uses machine learning to process data collected from many different sources—such as construction-cost-data company RSMeans, building-product manufacturers, public databases, and the app’s users themselves—to model energy use versus cost at every stage of design. It is intended as a holistic tool, obviating the need to employ an array of stand-alone software programs to analyze designs for daylighting, shadows, HVAC, and such, says Ahuja. The company plans to add electrical, plumbing, and structural analysis to the tool soon. “The idea is to give architects all of the data they need at every step along the design process,” says Ahuja.
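As a rough illustration of the kind of multi-parameter trade-off such tools weigh (a hypothetical sketch, not cove.tool’s actual algorithm; the option names, weights, and figures below are all invented), consider scoring a few facade options on daylight, energy, and cost:

```python
# Hypothetical sketch of weighted multi-criteria scoring. Every name,
# weight, and number here is invented for illustration only.
candidates = {
    "40% glazing, low-e":   {"daylight": 0.62, "energy_kwh_m2": 95,  "cost_usd_m2": 410},
    "55% glazing, low-e":   {"daylight": 0.74, "energy_kwh_m2": 118, "cost_usd_m2": 445},
    "55% glazing + shades": {"daylight": 0.70, "energy_kwh_m2": 102, "cost_usd_m2": 480},
}

def score(c, w_daylight=1.0, w_energy=0.01, w_cost=0.002):
    # Higher daylight is rewarded; energy use and cost are penalties.
    return (w_daylight * c["daylight"]
            - w_energy * c["energy_kwh_m2"]
            - w_cost * c["cost_usd_m2"])

for name, c in candidates.items():
    print(f"{name:<24} score {score(c):+.3f}")
print("best trade-off:", max(candidates, key=lambda k: score(candidates[k])))
```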

Although not a design firm, Sidewalk Labs, which is owned by Google’s parent company, Alphabet, has developed a tool called Delve that integrates financial parameters, energy models, and site constraints to help architects and planners design complex urban projects. It depends on generative design and AI to analyze factors such as density, daylight, and walkability to propose a range of options and show the trade-offs inherent in each one. The software is mostly used during the planning, feasibility, and entitlement stages of a project to demonstrate how various schemes will meet city requirements and the client’s bottom line, says Violet Whitney, director of product management for Delve.
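The generate-and-evaluate loop at the heart of generative design can be sketched simply. The example below is hypothetical, not Sidewalk Labs’ code, and uses crude stand-in objectives: sample many massing options, score each on competing goals, and keep only the options that no other option beats on every measure, which are the trade-offs a tool like Delve presents.

```python
# Hedged sketch of a generate-and-evaluate loop with Pareto filtering.
# The "massing options" and objective proxies are invented.
import numpy as np

rng = np.random.default_rng(1)
floors = rng.integers(4, 30, size=200)         # hypothetical options
footprint = rng.uniform(500, 3000, size=200)   # floor plate, m^2

density = floors * footprint                   # objective 1: maximize
daylight = 1.0 / floors                        # objective 2 (crude proxy): maximize

def pareto_front(objs):
    """Indices of options not dominated on all objectives (maximizing)."""
    keep = []
    for i, a in enumerate(objs):
        if not any(np.all(b >= a) and np.any(b > a) for b in objs):
            keep.append(i)
    return keep

objs = np.column_stack([density, daylight])
front = pareto_front(objs)
print(f"{len(front)} non-dominated options out of {len(objs)}")
```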

Artificial intelligence offers the potential for creating environments that are more responsive to their users’ needs at all scales, including the urban scale. Cities have been collecting vast amounts of data, and companies like Google have been providing functions like Street View and Earth for years now. AI can harness all this information, analyze it, and use it to help make places work better. Such data are the raw material that urbanists like Kevin Lynch and William H. Whyte had to generate periodically and painstakingly to develop their ideas; now these data are continually updated and available in real time, says Carlo Ratti, who directs MIT’s Senseable City Lab and practices architecture.

His firm, Carlo Ratti Associati, is using such data to map Pristina, the capital of Kosovo, to better understand the city’s public spaces for Manifesta 14, the European cultural biennale that will take place there in 2022. Ratti’s team is using algorithms to reveal hidden spatial and social patterns and identify key squares, streets, parks, and green areas that are either underused or misused. This past summer, Ratti applied this information to initiate a series of temporary urban interventions that offer new ways of using and reclaiming these spaces. The third phase of the project will track residents as they “vote with their feet” and show how the reconfigured spaces actually function. This feedback will then help determine which interventions—such as converting the area around an old brick factory into an “urban living room” and turning space for cars into places for people—are retained for the future development of the city. “AI can help us understand visual clues that might not be apparent using older tools,” says Ratti.
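One common way to mine such urban data, offered here as a hedged sketch rather than Carlo Ratti Associati’s actual pipeline, is to cluster geotagged activity points into hotspots and then flag public spaces that sit far from any of them as candidates for “underused” status. The coordinates and thresholds below are invented.

```python
# Hypothetical spatial-pattern sketch: cluster activity points, then
# flag squares far from every hotspot. All data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Stand-in geotagged activity (photos, check-ins) as x/y coordinates.
activity = np.vstack([rng.normal(loc, 0.05, size=(200, 2))
                      for loc in [(0.2, 0.3), (0.7, 0.8), (0.5, 0.1)]])

hotspots = KMeans(n_clusters=3, n_init=10, random_state=0).fit(activity)

# Hypothetical inventory of public squares to assess.
squares = {"Square A": (0.21, 0.29), "Square B": (0.9, 0.2)}
for name, xy in squares.items():
    d = np.min(np.linalg.norm(hotspots.cluster_centers_ - xy, axis=1))
    label = "active" if d < 0.1 else "possibly underused"
    print(f"{name}: distance to nearest hotspot {d:.2f} -> {label}")
```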


In Pristina, Kosovo, Carlo Ratti Associati used AI to better understand the city and then design a series of temporary interventions, including a community “living room” on a vacant lot. Photo © Atdhe Mulla

Ratti cautions that the recent explosion in data collection comes with very real concerns. “Ninety percent of all the data on the planet have been created in the last two years,” he says. Cities need to be transparent about how they collect, store, and use data. They also need to wrestle with privacy issues and prevent—or at least identify—biases that might be embedded in the information they use.

Ratti sees AI being used to “turn buildings into living things,” pointing to facades in particular. With sensors that collect information on humidity, temperature, and air quality, envelopes can respond in real time to enhance comfort, reduce energy use, and maximize efficiency. “Building facades today are corsets,” he says, “but we can make them living skins.”
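In its simplest form, such a responsive envelope is a sensing-and-actuation loop. The sketch below is deliberately rule-based for legibility (a learned controller would replace the hand-set thresholds with a trained model), and the function name and thresholds are hypothetical.

```python
# Hypothetical sensing-and-actuation sketch for a ventilating facade.
# Thresholds are invented; a real system would learn them from data.
def louver_opening(temp_c: float, humidity_pct: float, aqi: int) -> float:
    """Return a 0.0-1.0 opening fraction for a facade panel."""
    if aqi > 150:            # poor outdoor air quality: close up
        return 0.0
    opening = 0.5
    if temp_c > 26:          # warm outside: open for ventilation
        opening += 0.3
    if humidity_pct > 70:    # humid: open slightly more
        opening += 0.2
    return min(opening, 1.0)

print(louver_opening(temp_c=29, humidity_pct=55, aqi=40))  # -> 0.8
```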

One firm that has been exploring AI as a tool for devising adaptive facades is Foster + Partners. Working with Autodesk, Foster’s applied research-and-development group has been investigating self-deforming materials that can change their shape without any mechanical forces. Instead of employing motorized louvers or other such devices, these materials respond to environmental conditions in the same way an eye’s iris does to light. They do this by combining thermo-active materials with passive laminates (multilayered materials, usually plastic) and exploiting the difference in expansion and contraction rates to change the shape of the facade.

Because there’s a nonlinear relationship between the laminates’ internal forces and their behavior, designing the material to work in a desired way is remarkably complex. So Foster used machine learning to build what is known as a “surrogate model” to study all of the interactions and how the various layers would react to changing conditions. Instead of repeatedly adjusting the arrangement of laminates to get the desired deformed state, the designers started with the preferred end state and let AI figure out how to get there.
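The general pattern pairs a cheap learned “surrogate” with an inverse search. The sketch below is a hedged illustration of that pattern, not Foster + Partners’ or Autodesk’s actual code; the stand-in “simulation,” its parameters, and the target value are all invented.

```python
# Hedged sketch of surrogate modeling plus inverse design: fit a fast
# ML approximation of an "expensive" nonlinear simulation, then search
# the surrogate for inputs that hit a desired end state.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def expensive_simulation(x):
    """Stand-in for a slow nonlinear laminate model: params -> deflection."""
    return np.sin(3 * x[:, 0]) * x[:, 1] + 0.5 * x[:, 0] ** 2

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(500, 2))        # sampled laminate parameters
y = expensive_simulation(X)                 # 500 "slow" simulation runs

surrogate = GradientBoostingRegressor().fit(X, y)   # fast approximation

# Inverse design: start from the desired deflection and search the cheap
# surrogate for the parameters that produce it, rather than tweaking by hand.
target = 0.9
grid = np.stack(np.meshgrid(np.linspace(0, 1, 60),
                            np.linspace(0, 1, 60)), axis=-1).reshape(-1, 2)
best = grid[np.argmin((surrogate.predict(grid) - target) ** 2)]
print("laminate parameters for target state:", best.round(3))
```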


To devise adaptive facades with self-deforming materials, Foster + Partners reversed the typical workflow, starting with the desired outcome and then allowing AI to figure out how to achieve it. Image courtesy Foster + Partners/Autodesk

In addition to such surrogate-modeling work, Foster is also exploring a more advanced form of machine learning that it calls “design-assistance” modeling, says Martha Tsigkari, a partner at Foster. The goal of this kind of modeling is to facilitate architectural processes that do not have definitive answers—those that require subjective approaches—and “work alongside the intuition of designers in the creative process.” The firm is trying to understand the potential of AI at different stages—from design to construction to building operation, says Tsigkari. Ideally, AI would create a continuous information loop, so feedback from the operation of a completed building would help architects with the design and construction of their next one.


Visitors to the Smithsonian’s Futures exhibition can speak into Reddymade’s me + you installation and then have an AI-driven system translate their voices into color and light. Photo courtesy Reddymade

AI, though, is not just for big firms and big projects. Suchi Reddy, whose 16-person New York–based practice, Reddymade, combines art and architecture, worked with Amazon Web Services (AWS) for two years to develop AI technologies for a kinetic light installation in the 90-foot-high central rotunda of the Smithsonian’s Arts and Industries Building in Washington, D.C. Called me + you, the installation was commissioned for Futures, an exhibition that opened in November in the Smithsonian’s original home, which had been closed since 2004 due to structural concerns. Visitors can speak into nine circular “listening stations” at the base of the piece and tell their “future visions.” An AI-driven system then translates the meaning and tone of the spoken words into a kinetic “mandala” of color and light in the piece’s central totem. Each person’s sentiment changes the pattern and color of the totem, creating a constantly changing collective vision of the future. A web app allows people from around the world to add their voices to the sculpture, providing readings of the global “temperature” of sentiments on the future at any given moment.
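A toy version of the idea, scoring the sentiment of a phrase and then mapping the score to a color, fits in a few lines. This is a hypothetical sketch, not Reddymade’s or AWS’s pipeline: a real system would use trained speech and language models where this one uses a crude word list.

```python
# Toy sentiment-to-light sketch. The word lists and color mapping are
# invented; a production system would use learned ML models instead.
import colorsys

POSITIVE = {"hope", "bright", "green", "together", "thriving"}
NEGATIVE = {"fear", "dark", "alone", "collapse", "polluted"}

def sentiment(text: str) -> float:
    """Crude lexicon score in [-1, 1]."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, score / max(len(words), 1) * 5))

def to_rgb(score: float) -> tuple:
    """Map sentiment to hue: -1 -> cool blue, +1 -> warm gold."""
    hue = 0.66 - (score + 1) / 2 * 0.54
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(hue, 0.9, 1.0))

print(to_rgb(sentiment("a bright future where we live together")))
```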

Reddy says her work uses “emotional AI” that blends physics, neuroscience, and data technology. “I want to integrate feelings with technology to help us engage on a human level,” she says.

Another architect using AI to explore the relationship between the built environment and psychology is Mona Ghandi, who runs Morphogenesis Lab, a cross-disciplinary program at Washington State University (WSU) that includes students in architecture, neuroscience, computer science, and materials science. With an interest in “compassionate spaces,” Ghandi and her Morphogenesis team created an AI-driven installation called Wisteria that responds to the emotions of people interacting with it. Exhibited at WSU’s Pullman campus from February to August 2020, Wisteria comprised a “forest” of cylindrical fabric “shrouds” suspended from the ceiling that changed shape and color depending on biometric data collected from the people moving underneath it. By weaving into the shrouds a shape-memory alloy programmed to respond to readings of visitors’ body temperature and pulse, the Morphogenesis team enabled the fabric cylinders to move and activate LEDs that change color. The work is “contingent on user involvement and engagement,” says Ghandi, and “illustrates the collective emotion” of the visitors at any particular moment.
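The underlying mapping, biometric readings in and actuation commands out, can be sketched as follows. This is a hypothetical illustration, not the Morphogenesis Lab’s code; the pulse and temperature ranges, weights, and hardware commands are invented.

```python
# Hypothetical biometrics-to-actuation sketch: estimate arousal from
# pulse and skin temperature, then set LED color and shape-memory-alloy
# (SMA) current. All ranges, weights, and commands are invented.
def arousal(pulse_bpm: float, skin_temp_c: float) -> float:
    """Crude 0-1 arousal estimate from two biometric readings."""
    p = min(max((pulse_bpm - 60) / 60, 0.0), 1.0)    # 60-120 bpm -> 0-1
    t = min(max((skin_temp_c - 33) / 4, 0.0), 1.0)   # 33-37 C    -> 0-1
    return 0.7 * p + 0.3 * t

def actuate(level: float) -> dict:
    """Translate arousal into hypothetical hardware commands."""
    return {
        "led_rgb": (int(255 * level), 40, int(255 * (1 - level))),  # blue->red
        "sma_current_a": round(0.2 + 0.8 * level, 2),  # more current, more motion
    }

print(actuate(arousal(pulse_bpm=95, skin_temp_c=35.5)))
```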

Ghandi sees Wisteria as a first step in developing spaces that can respond to the needs of people with neurological and emotional issues, such as autism and post-traumatic stress disorder, or to improve the cognitive performance of children in school.

While AI is still in its infancy, architects are taking it in a wide range of directions—some of which may prove to be dead ends and others more successful. What’s clear, though, is that architects must get out in front of the technology or get run over by it.


Continuing Education


To earn one AIA learning unit (LU), read the article above and “Harnessing AI to Design Healthy, Sustainable, and Equitable Places,” by Phillip Bernstein, Mark Greaves, Steve McConnell, and Clifford Pearson (PDF).

Then complete the quiz. Upon passing the test, you will receive a certificate of completion, and your credit will be automatically reported to the AIA. Additional information regarding credit-reporting and continuing-education requirements can be found at continuingeducation.bnpmedia.com.

Learning Objectives

  1. Outline the history of AI.
  2. Describe the possible design-process advantages and efficiencies of AI.
  3. Describe ways AI is being deployed to create architectural components, buildings, and cities that are responsive to environmental conditions and users’ needs.
  4. Discuss the privacy and security concerns associated with the collection of vast amounts of data necessary to use AI as a design tool.

AIA/CES Course #K2112A