ChatGPT: What’s Happening and What’s Next.

Anthony Loss
10 January 2023
5 min read
By now, I’m sure everyone and their brother has told you about this amazing “ChatGPT” solution that is changing the very way we approach AI. If for some reason you live under a rock and haven’t heard of ChatGPT, I’ll do my best to explain it briefly. ChatGPT is a machine learning model, developed by OpenAI, that uses deep learning techniques to generate human-like text. It was trained on a dataset of billions of words, allowing it to understand and respond to a wide variety of text-based inputs. What does this mean? ChatGPT can be used for tasks such as language translation, text summarization, and question answering, among others.

Yesterday, I was tasked with sending an email to a customer about the differences between two products in my field. I was in a rush and did not want to take 15-20 minutes to type out a description I’ve explained many times. I put ChatGPT to the test: “Explain the difference between [product A] and [product B].” ChatGPT immediately produced a clear description of each and a clear explanation of how they differ, both articulated better than my mediocre explanation skills could manage. This is one of the many ways we can use ChatGPT in our lives.

I told my cousin about this tool, and he shared it with his father. A week later he told me that his father had used the AI to explain what’s been going on recently with the Israeli-Palestinian conflict. Ask a question, see an answer, ask follow-up questions, get follow-up answers. It’s similar to chatting with an expert via text message.

Now that I have given lackluster examples of how this thing works, let’s dive into how it is impacting the world and its consequences. First, I’ll preface this by saying that I’m biased towards AI. I’m a Solutions Architect who has spent years in the cloud learning about industry-disrupting solutions. I enjoy educating my customers on new ways to solve problems and increase agility with curveball solutions built on AWS. Yes, this includes AI/ML.

Having said that, I will also talk about the controversial implications that are arising every day as this product is used more frequently.


How it works and how it can help us. 


The first thing to understand is that “GPT” has been around for a while. GPT is short for Generative Pre-trained Transformer and has been in the works for years. In fact, ChatGPT is based on the GPT-3 (version 3) model, potentially among other models as well. We do not yet know the inner workings of GPT-3 specifically, as it is not open source, but OpenAI has open-sourced its GPT-2 model, and we know it was trained on a dataset of over 40GB of text data, including a diverse range of internet text such as articles, blogs, comments, and websites. We can reasonably guess that GPT-3 uses a similar architecture to GPT-2, which includes the use of Natural Language Processing (NLP), next-word prediction (known as decoding), and artificial neural networks.
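If you’re curious what that next-word prediction (decoding) step actually looks like, here’s a minimal sketch that runs the open-sourced GPT-2 model. It assumes the Hugging Face transformers and PyTorch libraries, which are my choices for illustration and not something OpenAI ships with ChatGPT.

```python
# Minimal sketch of next-word prediction (decoding) with the open-sourced GPT-2 model.
# Assumes `pip install transformers torch`; library choice is mine for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "ChatGPT is a machine learning model that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model scores every token in its vocabulary as a candidate "next word"...
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]
next_id = logits.argmax().item()
print("Most likely next token:", tokenizer.decode([next_id]))

# ...and repeating that prediction token by token (decoding) produces whole sentences.
output_ids = model.generate(
    input_ids,
    max_length=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```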

ChatGPT is an awesome solution that lets the general public use it for a ton of use cases. However, what we’re going to start seeing (because of the explosion of ChatGPT) is the use of GPT models trained for particular businesses. For example, Lexion, a contract management system, has its own trained GPT-3 model to help lawyers generate contracts! A user can ask Lexion to propose language for a specific clause in a contract, and it will instantly produce a draft. Or an assistant can use the tool to quickly summarize clause language for their customers. See more here. Another cool example is how Keeper Tax, a San Francisco startup, uses GPT-3 to interpret data from bank statements and find tax-deductible expenses.
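To make the pattern a bit more concrete, here’s a hypothetical sketch of how a business might send a domain-specific prompt to a GPT-3 model through OpenAI’s API. The prompt, API key placeholder, and expense-classification framing are my own illustrations, not how Lexion or Keeper Tax actually built their products; it assumes the OpenAI Python library’s pre-1.0 Completion interface.

```python
# Hypothetical sketch: prompting a GPT-3 model for a business-specific task.
# NOT any vendor's actual implementation -- just an illustration of the pattern.
# Assumes the OpenAI Python library (pre-1.0 Completion interface) and a valid API key.
import openai

openai.api_key = "sk-..."  # placeholder; supply your own key

bank_line = "01/04 STARBUCKS #1234 SEATTLE WA $6.45"

prompt = (
    "You are a bookkeeping assistant. Given the bank statement line below, say whether "
    "it is likely a tax-deductible business expense for a freelance software consultant, "
    "and explain why in one sentence.\n\n"
    f"Statement line: {bank_line}\nAnswer:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model available at the time of writing
    prompt=prompt,
    max_tokens=60,
    temperature=0,             # keep output stable for a business workflow
)
print(response.choices[0].text.strip())
```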

It's clear that OpenAI’s ChatGPT has brought GPT models to the attention of the everyday person. These types of solutions can have terrific impacts across various fields for good causes, such as:

Education: GPT can be used to create personalized lesson plans and educational materials, making it easier for students to learn at their own pace.

Healthcare: GPT can be used to analyze large amounts of medical data and assist doctors in diagnosing and treating patients. It could also be used to generate personalized care plans and improve patient outcomes.

Environmental conservation: GPT can be used to analyze large amounts of data on environmental issues and help identify patterns and trends, which can inform conservation efforts.

Disaster response: GPT can be used to quickly analyze large amounts of data, such as satellite imagery, to identify areas affected by natural disasters and aid in relief efforts.

Social good: GPT can be used to power chatbots for mental health, addiction, and other sensitive topics, providing instant help and support to people in need.

The possibilities for good are endless with the wider adoption of GPT models. However, as with everything good in this world, there is usually a darker side sitting adjacent to the good…


Talk about introducing a cargo ship full of concerns. 


The possibilities of GPT are terrific, and it can be used for all sorts of good. But what about ChatGPT in the hands of the general public? For kids, everyday jobs, pay-per-click businesses, and more.

“Write me a chapter 4 summary of To Kill a Mockingbird. Then, describe how the main conflict relates to the use of foreshadowing in later chapters.” This is how students are using ChatGPT to do their homework. How do teachers defend against this type of plagiarism? Even if the student doesn’t copy and paste this direct answer (which would still be hard to detect, given it’s a model and produces a different answer every time you ask a similar question), they can read this short output and relay the work with 10% of the effort the teacher intended the work to require. Think about all the different use cases for this: history, math, chemistry, biology, and on and on we can go.

I asked a tenured teacher about this dilemma. While she showed initial concern, I was surprised by the answer she gave me: “I know how my students write, and how they explain their arguments. I will be able to tell if the answer is real understanding or artificial output.” While I can see this being true in the interim, what happens when next year’s students come in? How will you initially know what’s theirs and what’s ChatGPT’s? Also, if professionals are allowed to use this as a way of finding answers, why wouldn’t it be encouraged in schools? Isn’t that what education is about, to produce individuals who are ready to make real-world impacts?

Remember the real-life example I gave at the beginning of this blog? Notice anything? I said “[product A] and [product B]”. Why would I do this? If you’re guessing it’s because I didn’t want the recipient to know that I used ChatGPT to produce my answer, then you’d be exactly right. What’s the morality of this? Is it ethical to have ChatGPT complete work-related tasks? I think so. But where’s the line? Should we allow defense lawyers to use ChatGPT to write their opening statements? How about movie producers using ChatGPT to write a screenplay?

Google is at DEFCON 1. At this point, it’s probably clear that it’s easier to have ChatGPT tell you about the best Italian restaurants on the Lower East Side of Manhattan than to Google it and surf through pages. Not only this, but what about companies like TripAdvisor, Yelp, WebMD, or any other firm that pays Google to have its results near the top of the page? This raises concerns about how ChatGPT’s answers will be regulated or monetized. If the answer is to have this monumental dataset deliver references with every conversational answer, then it abstracts away the very point of the AI solution: being conversational.


How we move forward. 


Concern, good, concern, good, concern. But you know what? All disruptive, innovative solutions cause concern. The use of a “world wide web” to get free information caused major uproar among firms, libraries, bookstores, and services that charged for the same thing. The same goes for the smartphone: students with computers in their pockets, you bet this caused concern in educational organizations. I can go on and on, but the fact is that if we stop innovating because we’re concerned about the outcome, then we stop growing as a species.

We move forward carefully, but we move forward. We understand and acknowledge the power of GPT tools like ChatGPT, and we provide considerations for ethical implications, regulations to ensure responsible use, human oversight, and, most importantly, transparency. Organizations and individuals should be transparent about their use of GPT technology.

We should all be excited as the next wave of revolution hits the world.  In fact, it might be a good idea to see what Function Factories is going to do with GPT technologies in the near future…