Weekly Editorial

Exploring AI's Influence on Our Industry

Written By Rob Kirkbride, Editor-in-chief, OI Publications • February 5, 2024


I recently attended the BIFMA 360° Leadership Conference, and the speakers, as usual, were excellent and thought-provoking. Since the conference is about preparing industry leaders for the future, it shouldn't come as a surprise that two of the speakers discussed artificial intelligence and how it will change the world, including our industry.

And while I'm all for technology and embrace it, often as an early adopter, I'm skeptical of AI and more than a bit fearful of all its implications.

A speaker on the last day of the event was Shawn Kanungo, partner at Queen & Rook, a company specializing in AI. He emphasized the impact of AI on culture, business, technology, and individuals. He explained how generative AI will drive change in how people approach work and how innovation is 90% psychology and 10% technology. Kanungo also discussed the importance of being bold and asking bold questions to innovate. He argued that in the future, organizations must either leverage generative AI or risk becoming irrelevant.

“When I come to these conferences, they’re expecting for me to give them some advice or some answers. And the beautiful thing about the world today is that we have all the answers. If you want answers, just ask ChatGPT. But if you want to innovate, you want to be bold, you have to ask bold questions,” he said.

During his presentation, Kanungo showed an image of an office stool that he grabbed from the OFS website and fed through a generative AI engine, which spat out a "redesigned" stool. He also took a picture from Haworth's website and completely changed the content of the photo using AI. He said every bit of information currently on the web can be replicated.

I challenged him after the presentation and asked him about the New York Times lawsuit against OpenAI and Microsoft for stealing its content and using it to train their chatbots. "Couldn't you swap out the word 'replicated' for 'stolen'?" I asked. He said that everyone "grabs" stuff off the web. I wasn't satisfied with that answer, because like the New York Times, in a much smaller way, at officeinsight magazine, we work hard (and pay our writers a lot) to create real content. Just because it is on the web does not mean people can steal it, repackage it and sell it for their own profit, which is exactly what ChatGPT is doing, in my estimation.

Technology tends to develop faster than we can answer ethical questions surrounding these advances. I’ll give you an example.

When I was a reporter at The Ann Arbor News in the early 1990s, I covered a court case involving a couple who had gone through in vitro fertilization, a process where mature eggs are collected from ovaries and fertilized by sperm in a lab. These embryos were frozen and the couple successfully had a child. They later divorced and the woman wanted to have more children using the embryos that were stored in the lab. The man did not. The court was forced to decide if the woman had the right to have more children, even if the man did not. The case brought up interesting ethical questions: Did the woman have the right to have the children? Who “owned” the embryos? Would the man have to pay child support if the woman was allowed to have more children? Did the man have the right to have the frozen embryos destroyed?

My point is this: Technology often moves faster than our ability to deal with it, which seems to be happening with AI as well. I do believe AI is going to be a revolutionary tool for our industry and the world, but shouldn’t we pump the brakes until we can answer some of the ethical questions surrounding it?

AI has the potential to dramatically improve efficiency, make us better at our jobs and help us solve problems that we can't solve on our own. It also has the potential to make workers redundant, create "deep fakes" (images so real we won't be able to tell fact from fiction), introduce bad data and mistakes, automate weapons systems beyond our control and create uncontrollable self-aware AI.

Technology is important and growing more so every day, but our ability to answer these ethical questions has not kept pace. When it comes to AI, maybe we should slow down and think through these questions before we rush headlong into using tools that we don't fully understand.
