
Credit Union and Fintech Insiders Share Pros and Cons of ChatGPT/AI-Driven Content

By W.B. King


From journalism to software coding to points in between, the popularity of ChatGPT has given rise to heated debate over the quality and legitimacy of artificial intelligence (AI)-generated content.

In an effort to better understand how nonorganic content is impacting the financial services industry, Finopotamus spoke with five industry insiders who shared their thoughts on what ChatGPT and AI mean for credit unions, fintechs and the public at large.


Preserving Editorial Trust


Joe LoBello, founder and CEO of the New York City-based LoBello Communications, a firm specializing in working with financial services and technology companies, said AI-driven content is making “organic reporting more important than ever before.”


Acknowledging that AI has “allowed people in our industry to generate decent-quality content faster,” and that some of his firm’s clients have adopted it for “interesting use cases” aimed at improving productivity, LoBello added that “disastrous consequences” hang in the balance.


“Unfortunately, this revolutionary technology is also driving a significant increase in misinformation. It’s elevating the media’s role as a source of objective, fact-checked information,” he continued. “Companies are using AI to churn out news releases and other materials, which can hurt their brand reputations and raise questions from regulators if they aren’t careful.”


Agreeing that ChatGPT is “all the rage recently,” Agent IQ’s CEO and President Slaven Bilac said its viability in various industries, including media, remains questionable.


“While I have doubts about the immediate usefulness of ChatGPT for serious applications, I do think it makes sense for content providers to leverage it to produce (more) relevant content,” he said.


The San Francisco-based fintech offers financial institutions (FIs) a personal digital platform “supercharged” with AI aimed at improving communication and engagement with members and customers.


“Nonetheless, it is important to preserve the trust the consumers have and provide accreditation to the source as appropriate – human or AI system – to allow consumers to understand who is behind the content and allow them to apply appropriate level of scrutiny/trust to the content,” Bilac added.


Evolving Large Language Models


Approaching every public relations campaign with “complete attentiveness balanced with empathetic understanding,” Mary York, CEO of York Public Relations, has serious concerns with ChatGPT.


The Atlanta-based PR firm is dedicated exclusively to serving the fintech and financial services industry.


“There is a real potential for producing unintentionally biased and inaccurate content. Content may sound plausible and authoritative, but it’s not always factual,” York, echoing LoBello’s stance, told Finopotamus.


“AI-driven content also doesn’t always reflect current issues. As an example, we ran tests following the Silicon Valley Bank collapse. Even a week after, ChatGPT had a hard time distinguishing between the collapse and general branch closures,” York continued. “ChatGPT also eliminates unique thought and perspective on issues. Innovation completely goes away at that point. It simply pulls from existing information, which, again, may be inaccurate or biased.”


Another concern, York noted, is that ChatGPT currently returns the same response to the same or similar questions posed by different users. “For instance, two different writers can ask for an article on how credit unions can build a successful deposit strategy and receive nearly identical content.”
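York’s duplication concern comes down to how the model samples its output. A minimal sketch of the scenario she describes, assuming the OpenAI Python SDK (v1+) and an illustrative model name, shows how two writers issuing the same prompt with deterministic sampling settings would tend to receive near-identical text:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = ("Write a short article on how credit unions can build "
          "a successful deposit strategy.")

# With temperature=0 the model samples near-deterministically, so two
# writers sending the same prompt tend to get near-identical articles.
for writer in ("writer_a", "writer_b"):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # raise toward 1.0 for more varied output
    )
    print(writer, "->", response.choices[0].message.content[:80])
```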


Explaining that ChatGPT is based on a large language model (LLM), Glia Technologies Product Marketing Lead Jake Tyler told Finopotamus that each new LLM iteration encodes more information, and therefore more value, for users.


“This can be achieved by increasing both the amount of textual data used as input to train the model and the number of parameters inside the model used to encode the learned data,” Tyler continued. “Such technology has a wide range of potential applications, especially when multimedia integrations like voice, video and images are included.”
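To make the “number of parameters” half of that scaling concrete, here is a back-of-the-envelope sketch. The 12 × layers × width² rule of thumb is a common approximation for decoder-only transformers in general, not a disclosed figure for ChatGPT, and the example sizes are illustrative:

```python
# Rough parameter count for a decoder-only transformer. The constant 12
# approximates the attention and feed-forward weights in each layer;
# the token embedding table is added separately.

def approx_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    per_layer = 12 * d_model ** 2       # attention + feed-forward weights
    embeddings = vocab_size * d_model   # token embedding table
    return n_layers * per_layer + embeddings

print(f"{approx_params(12, 768, 50_000):,}")   # ~123M, GPT-2-small scale
print(f"{approx_params(24, 1536, 50_000):,}")  # ~756M: doubling depth and
                                               # width grows the layer
                                               # weights roughly 8x
```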


The New York City-based Glia Technologies helps financial institutions reinvent how they support customers in a digital world.


Because the technology is still in the early stages of development, Tyler said, ChatGPT and other emerging AI technologies have limitations.


“They are temporal, meaning they are trained at a singular point in time and may lack information published since that time. They are also expensive, with costs for training LLMs reaching upwards of $5 million, and that is before considering the upkeep required for processing each request,” he continued. “With this in mind, content created by humans is still essential. These AI models can lack ethics, posing a threat to early-adopting businesses that invest in and use these systems.”


AI Enhancing Operations


While Karan Kashyap, co-founder and CEO of Posh Technologies, stressed the importance of human interplay with advancing technologies, he noted that AI can process “vast amounts” of data quickly and effectively, which can be utilized in many financial services scenarios.


The Boston-based conversational AI startup, founded in 2018, was a spinoff of lab work conducted at the Massachusetts Institute of Technology, where Kashyap earned his undergraduate and graduate degrees.


“This can be particularly useful in a field like tech where information is constantly evolving,” he said. “AI can also help companies better understand their audiences and tailor content to their specific needs and preferences.”


In the financial services space, a common use case for AI-driven conversational tools is customer service, including providing customers with real-time answers to banking questions 24/7, Kashyap said.


“This can improve efficiency and reduce costs for financial institutions, while also improving the customer experience and helping drive revenue growth,” he continued. “Human-created content is still critically important, even in a world where AI-generated content is becoming more prevalent. While AI can help automate certain tasks and streamline operations, there are still aspects of content creation that require the expertise, creativity, and nuanced understanding that only humans can provide.”
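The division of labor Kashyap describes, routine questions automated and everything else escalated to a person, can be pictured with a deliberately toy sketch. Real conversational AI platforms use trained intent models rather than keyword matching, and every question and answer below is hypothetical:

```python
# Toy intent router: answer routine banking FAQs, hand the rest to a human.
FAQ = {
    ("routing", "number"): "Your routing number appears on the bottom of your checks.",
    ("branch", "hours"): "Branches are open 9 a.m. to 5 p.m., Monday through Friday.",
    ("lost", "card"): "To report a lost card, call the number on our website immediately.",
}

def answer(question: str) -> str:
    text = question.lower()
    for keywords, reply in FAQ.items():
        if all(word in text for word in keywords):
            return reply
    # No confident match: escalate to a person, per the hybrid model above.
    return "Let me connect you with a member service representative."

print(answer("What are your branch hours?"))   # automated reply
print(answer("I want to dispute a charge."))   # falls through to a human
```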


In an ideal circumstance, Agent IQ’s Bilac said, chatbots should streamline the credit union member experience. He added, however, that it is difficult for credit unions to deploy advanced chatbots like ChatGPT in member-facing settings.


“Because the purpose goes beyond talking to the member and understanding what they want, it’s also performing an action to help solve the problem, which introduces several hurdles – one of those is human connection,” he continued. “People can not only provide correct solutions and represent the brand but also empathize with the member. As we all know, money is a sensitive subject, and managing finances frequently requires emotional support, something humans alone can provide.”


Glia’s Tyler explained that many AI-driven companies have made significant investments in risk mitigation. Even so, he added, ChatGPT is still far from perfect.


“The current risks might make organizations, especially those in regulated spaces, hesitant about leveraging these technologies for many consumer-facing and mission-critical functions,” Tyler continued. “This is where it becomes increasingly important for businesses to still rely on trusted human representatives who can offer tailored content like research, articles, coding and more – without the risk directly involved.”


Combating the ‘New’ Fake News Model


When asked whether ChatGPT has become so prevalent that publications should provide readers with a notice declaring that content is organic and not AI-driven in any sense, LoBello said he believes it is an important message to convey.


“Readers need to know if AI created the content. That transparency alerts the reader that the content may lack the facts, context, analysis, and critical thinking that a human writer would bring to the table,” LoBello continued. “Ultimately, the essential factors in determining the credibility of content are the reputation and track record of the publication and its writers.

“Fake news is going to be a tremendous challenge. We encourage our clients to only engage with reputable, organic news sources and leave AI news to the bots for now.”



In York’s view, ChatGPT offers value in that it can quickly explain “complex topics or inspire new ideas.” But, she added, the professional use of this technology is at best questionable.


“A writer who is tempted to include simple descriptions – citing it as AI-generated – may quickly lose credibility. It’s like citing Wikipedia,” she continued. “To ensure accuracy and authenticity, publications and writers need to use caution if leveraging AI. If the content is inaccurate, it could lead to reputational damage. If the content infringes on intellectual property or is defamatory, it could lead to a lawsuit.”


A better use case for ChatGPT, York added, is gaining a general understanding of a topic, followed by “old-fashioned research and interviews with industry influencers or financial institutions for their unique perspectives.” Using AI to generate full stories, she said, “could be detrimental to any organization, and further fuel concerns around mis- and disinformation.”


Because the technology is still evolving, Tyler said, AI-driven content is forecast to become better informed and more widely accepted over time. As such, executives in all industries should not shy away from AI, but rather learn how to leverage it appropriately.


“Everyone should be embracing AI. It can/will hugely impact productivity for media. But, best to think of it as a co-pilot, helping with research and writing, adjusting tone, adjusting length, transforming articles for different placements,” he said. “I think humans are still needed to assess quality and to add creativity.”
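Tyler’s co-pilot framing translates into a simple pattern: the human supplies the reporting, the model adjusts tone or length, and the human reviews the result before publication. A sketch of that workflow, again assuming the OpenAI Python SDK and an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()

def adjust(draft: str, tone: str, max_words: int) -> str:
    """Rewrite a human-written draft; a human still reviews the output."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of model
        messages=[
            {"role": "system",
             "content": (f"Rewrite the user's draft in a {tone} tone, in at "
                         f"most {max_words} words. Do not add new facts.")},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

draft = "Credit unions saw deposit growth slow last quarter..."  # human reporting
print(adjust(draft, tone="conversational", max_words=120))       # human reviews
```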


Building on Tyler’s thoughts, Kashyap explained that AI is often limited by the data it was trained on. AI algorithms, he noted, cannot provide unique perspectives, deep analysis or critical thinking.


“Human-created content, or at least human-driven content (even if AI was used as an assistive tool to augment the human), can help to build trust with readers and establish the credibility of the publication,” Kashyap continued. “Financial services readers will want to know that they can rely on the information and analysis provided by these publications, and the presence of human oversight can help to ensure this trust.”
