With the introduction of tools such as Midjourney and ChatGPT, generative AI has burst into global awareness—and is quickly gaining momentum. Driven at first by younger users exploring its possibilities in areas such as art and content creation, public interest has blossomed into media buzz. With the market for generative AI estimated to reach more than $109 billion by 2030, conversations about this technology have swiftly shifted to focus on wider application—and on ways in which content generated by AI-driven tools might pose previously unconsidered challenges.
While most IT leaders agree that artificial intelligence (AI) is a global game-changer, some fear that generative AI and machine learning tools may be opening Pandora’s box, especially with regard to the presentation of written content or information that circumvents standard vetting processes for publication.
Concerns centered on the potential abuse of generative tools for the production of pseudo-scientific information, factual-sounding (but erroneous) “informative” content and so forth are not unfounded. However, there’s an important distinction that must be drawn between the information presented by tools such as ChatGPT and tools designed for more specific use cases or industries.
Take, for example, generative AI for design purposes. Over the last several years, AI and machine learning technologies have found practical use in product design, architecture and engineering (A&E) and industrial automation. Successfully incorporated into tools such as Autodesk’s Fusion 360 platform, generative AI has quietly revolutionized the process of design itself, enabling iterative digital modeling, which unlocks unprecedented innovation.
What differentiates valuable machine-generated results from the substance-lacking (and often blatantly incorrect) output of tools such as ChatGPT is simple—the datasets upon which they rely. It may help to think of those datasets as enormous libraries. ChatGPT, for example, responds to a prompt by drawing on an enormous language library: a dataset of text and written content equivalent to years of accumulated internet content. But bigger is not, in this case, better; one only has to read through one or two Twitter threads to realize that the majority of public content is not necessarily of high quality—let alone factually correct. In the familiar phrasing of computer scientists worldwide, garbage in equals garbage out.
In contrast, generative AI and machine learning tools trained on datasets of highly vetted information specific to their unique purposes will produce quality results relevant to their use. To carry forward the library analogy, imagine a high schooler tasked with answering a tricky engineering question. In the first case, the student is given access to a library containing ten years of social media content—and in the second case, access to a university engineering department’s library. With access to a high-quality dataset, even a mediocre student is more likely to produce a high-quality response.
Understanding this concept is key when assessing the value of generative AI and machine learning for specific use cases. Industries with the potential to benefit the most from the use of generative technologies are those in which access to specialized data intersects with complex needs. Speedy adoption into verticals such as A&E, industrial design and product development has already proven this to be true, and these industries are already reaping the rewards of AI-driven innovation and development.
Industries that still stand to benefit greatly from generative AI’s evolution include construction, medicine and scientific research. Within medical and scientific fields, AI-driven tools are unlikely to see speedy adoption. Despite the wide availability of high-quality data within these fields, concerns related to patient privacy, cybersecurity and practical regulation and oversight will likely throttle adoption rates in hospitals and research facilities for quite some time.
In contrast, the construction industry may struggle to digitize its data but be overall more receptive to the swift onboarding of generative AI (and to realizing its potential for industry-wide transformation). The physical complexities and critical interdependencies of construction scheduling and sequencing pose unique challenges to optimization, but these challenges are easily surmounted by AI-driven tools fueled by qualified datasets, and the immediate benefits are undeniable.
Adoption of these technologies within the building and development sectors hinges on the digitization and parametrization of vast amounts of data relating to built environments. That effort is still in its fledgling stages: manual entry of project details remains the current reality for most developers. Still, AI-driven tools for construction optimization are already unlocking pathways to faster, safer, more sustainable construction. As these private knowledge bases and datasets evolve, so will the world of construction as we know it, as builders and developers will be able to access the wisdom of all previously executed projects to seamlessly (and instantaneously) optimize current and future endeavors.
In short, generative AI is revealing new and exciting pathways to innovation—across many sectors and silos of industry. To shy away from its use based on examples of ChatGPT gone awry is simply to misunderstand the technology at its core, as well as the exciting potential for the use of that technology when wielded wisely.
Read the article as published on Forbes here.