Across most industries, technologists and IT professionals agree that artificial intelligence has the potential to drastically alter the way we work, learn, produce, and do business. However, some are concerned about the consequences of using generative AI tools in ways that are not clearly defined and controlled. These fears include, among others, the potential misuse of AI-based technology to create pseudo-scientific material or to falsify factual documents. While these worries are legitimate, they also point to a critical need for precision in the language we use when discussing AI. Specifically, these conversations underscore the need to distinguish between generic tools like ChatGPT and technologies built for focused, industry-specific purposes.
AI-based tools used in generative or iterative design are an excellent example. Over the past few years, artificial intelligence technologies have been deployed extensively in product design, architecture and engineering (A&E), and industrial automation, reshaping the built world as we know it (albeit without the media splash we now see with ChatGPT). To give one concrete example, Autodesk’s Fusion 360 has leveraged generative AI to simplify and streamline the product development process itself, transforming the way we design and manufacture products and unlocking new pathways for the creation, and even automation, of complex designs.
For the layman, the key point is that these tools are only as good as the data they learn from: the output you receive from AI-driven tools (no matter how powerful or intelligent) is limited by the quality of the dataset provided.
Datasets can be thought of as huge libraries. ChatGPT, for instance, references an enormous collection of text and written materials, a library of language drawn from years of online content, and uses that information to respond to a question or instructional prompt. But increasing the size of the dataset does not always improve quality. In fact, the broader the dataset, the more opportunities there are for inaccurate responses. A wide, generalized dataset produces widely generalized responses, many of which may be irrelevant to the questions posed.
In contrast, generative AI tools trained on datasets of carefully selected information relevant to their specific purpose deliver valuable outcomes far more reliably. When a dataset has been vetted for accuracy, relevance, and applicability, the responses it supports are of much higher quality, and the resulting solutions inherit those same qualities. The massive computing power behind AI-based tools also allows iteration at speeds that far surpass human processing, leading to insights and solutions of surprising creativity and utility.
In the same way that a researcher focused on solving a complex physics problem would benefit more from access to a university library than to 10 years of social media posts, generative AI delivers better solutions when given access to factual information specific to the problem at hand. This is what differentiates tools and platforms designed for industry-specific purposes from those designed for use by the general public.
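To make this tangible, consider a deliberately simplified sketch in Python. It is not how ChatGPT or any commercial system works; the corpora, the query, and the keyword-overlap scoring are invented stand-ins for the far more sophisticated matching real generative models perform. It simply illustrates the point above: the same question posed against a broad, general corpus and against a curated, domain-specific one returns answers of very different usefulness.

    # Illustrative sketch only -- not any vendor's actual system.
    # Keyword overlap stands in for the far more sophisticated statistical
    # matching real generative models perform. All snippets are invented.

    def relevance(query: str, document: str) -> int:
        """Count how many query terms appear in the document."""
        return len(set(query.lower().split()) & set(document.lower().split()))

    def best_match(query: str, corpus: list[str]) -> str:
        """Return the document in the corpus that best matches the query."""
        return max(corpus, key=lambda doc: relevance(query, doc))

    # A broad, general-purpose corpus (hypothetical snippets).
    general_corpus = [
        "celebrity news and gossip from this week",
        "ten tips for better vacation photos",
        "concrete is a popular building material",
    ]

    # A curated, domain-specific corpus (hypothetical snippets).
    construction_corpus = [
        "concrete curing time depends on mix design and ambient temperature",
        "formwork must remain in place until concrete reaches design strength",
        "cold weather slows concrete strength gain and may require heated enclosures",
    ]

    query = "how long should concrete cure in cold weather"
    print(best_match(query, general_corpus))       # a loosely related, generic line
    print(best_match(query, construction_corpus))  # a focused, usable answer

The broad corpus can only offer a vaguely related statement; the curated one addresses the actual question. Scale that contrast up by many orders of magnitude and you have the difference between a general-purpose chatbot and a purpose-built industry tool.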
Construction, medicine, and scientific research have been slower to respond to the evolution of generative AI. In medicine and scientific research, despite the wide availability of high-quality data, concerns about patient privacy, cybersecurity, and the practical regulation and oversight of deployments will likely throttle adoption in hospitals and research facilities for quite some time.
The construction industry, by contrast, still struggles to digitize its data but may be more open to the rapid adoption of generative AI, and to realizing its potential for industry-wide transformation. Because construction scheduling and sequencing pose unique optimization challenges, rooted in both physical complexity and intricate interdependencies in execution, an AI-driven tool equipped with a quality dataset may prove uniquely adept at proposing strong solutions, as the simplified sketch below illustrates. Developers and contractors delivering complex projects, such as infrastructure, megaprojects, and sophisticated manufacturing and production facilities, stand to benefit most from early adoption of generative AI, and to see those benefits quickly.
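The sketch below, again in Python, shows why sequencing is a search problem. The task names, durations, crew count, and brute-force search are all invented for illustration and bear no resemblance to how ALICE or any commercial scheduling engine actually works; the point is only that, even with five tasks and two crews, the order in which work is dispatched changes the project duration, and finding the best order means searching a combinatorial space.

    # Illustrative sketch only: five hypothetical tasks, two crews, and a
    # brute-force search over dispatch orders. Real projects have thousands
    # of tasks, many resource types, and far smarter search than shown here.

    from itertools import permutations

    # task name: (duration in days, set of tasks that must finish first)
    tasks = {
        "excavate":   (3, set()),
        "foundation": (5, {"excavate"}),
        "framing":    (7, {"foundation"}),
        "site_road":  (4, set()),            # independent of the building itself
        "utilities":  (2, {"site_road"}),
    }

    def makespan(order, crews=2):
        """Project duration when tasks are dispatched in the given priority
        order to the first available crew, respecting precedence.
        Returns None if the order violates a precedence constraint."""
        crew_free = [0] * crews
        finish = {}
        for name in order:
            duration, prereqs = tasks[name]
            if not prereqs.issubset(finish):
                return None
            ready = max((finish[p] for p in prereqs), default=0)
            crew = min(range(crews), key=lambda c: crew_free[c])
            start = max(crew_free[crew], ready)
            finish[name] = crew_free[crew] = start + duration
        return max(finish.values())

    # Enumerate every dispatch order and keep the feasible ones.
    results = [(makespan(order), order) for order in permutations(tasks)
               if makespan(order) is not None]
    best_days, best_order = min(results)
    worst_days, _ = max(results)
    print(f"best sequence finishes in {best_days} days: {best_order}")
    print(f"a poor but feasible sequence takes {worst_days} days")

Even at this toy scale, the gap between the best and worst feasible sequences is measured in days. On a real project with thousands of activities and many resource constraints, no human team can explore that space exhaustively, which is precisely where AI-driven optimization earns its keep.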
As generative AI opens new and exciting avenues for innovation across many sectors, the construction and development industries should take note. To shy away from these tools based on examples of ChatGPT gone awry is to misunderstand the technology at its core, as well as the exciting potential offered by strategic implementation.
By leveraging the power of AI, the construction industry can tackle its most complex challenges, with substantial benefit to stakeholders worldwide.
###
René Morkos is the founder of ALICE Technologies and an adjunct professor in Stanford University's Ph.D. program in construction engineering. René obtained his Ph.D. in artificial intelligence applications for construction as a Charles H. Leavell fellow at Stanford. He is a second-generation civil engineer with over 15 years of experience divided between industry and academia. Morkos’ professional experience includes project management in Afghanistan, underwater pipeline construction, automation engineering on a $350 million gas refinery expansion project in Abu Dhabi, ERP system implementations, and various Virtual Design and Construction projects.