Find out which chatbot is the king of summarization as we challenge them to condense documents.
For our second test, we will provide the chatbots with a link to an article and ask them to provide a list of bullet points and a one-paragraph summary of the post. This article is from OpenAI's blog, and it's announcing new features coming to ChatGPT, such as the ability to talk to the chatbot through the mobile app and the ability to upload images and receive detailed responses.
For this test, I'm using the recently re-enabled web browsing model from the GPT-4 dropdown. I'll paste in my prompt, which is: please summarize the following blog post into a list of six to eight bullet points, identifying the key takeaways for a non-technical audience. Also include a paragraph that succinctly captures the core message of the post. And then it includes a link to the post itself.
Wow, interesting. It seems ChatGPT is looking to improve the quality of its responses, so it's offering me two different responses. It wants me to vote on the one I feel is the best. And this is a prime example of how these chatbots are constantly iterating and evolving. I'll let ChatGPT finish generating both responses, and we will review them one at a time.
All right, the first response has seven bullet points from the article, such as: new voice and image capabilities are introduced in ChatGPT, enabling voice conversations and visual interactions. Users can snap pictures to have live discussions about them with ChatGPT. The rollout is initially for Plus and Enterprise users, with voice features available on iOS and Android and image features coming to all platforms.
For our summary, it replied with: the blog post announces the enhancement of ChatGPT with voice and image capabilities, offering a more intuitive interaction experience. The initial rollout is for select users, with a cautious deployment strategy ensuring safety and system refinement. The new functionalities open doors for creative and accessible applications, albeit with associated risk, necessitating responsible usage. I think this summarized the article I provided it with perfectly, but let's check out response 2.
Response 2 starts off with our bullet list, and it includes points such as: OpenAI introduced voice and image capabilities to ChatGPT, enhancing its interactive interface. Users can have voice conversations and share images with ChatGPT for more intuitive interactions. The new features can help in real-life scenarios like discussing landmarks or solving math problems with visual aids. I like the idea of providing examples of how the new capabilities can be used, so that was cool.
It concludes with a summary that reads: OpenAI is expanding ChatGPT's capabilities to include voice and image interactions, allowing users to communicate and share visuals with the AI. This update aims to enhance user experience by enabling real-world applications, such as discussing landmarks or planning meals based on fridge contents. The gradual rollout of the features reflects OpenAI's cautious approach towards ensuring system safety and effectiveness, while also paving the way for an innovative and user-friendly approach.
Awesome, both of these responses were well written and did a great job at summarizing the article I provided, but I'm gonna vote for the option to the left. And that'll do it for this test; let's go see how Google Bard handles this prompt.
All right, I'm back in Google Bard, and I'm gonna paste the prompt and submit it to Bard. Okay, Bard starts off with the one-paragraph summary. It says: ChatGPT, a powerful model from OpenAI, has gained new capabilities to see, hear, and speak. This means that ChatGPT can now be used to have voice conversations and show images, in addition to its existing capabilities of generating text, translating languages, and writing different kinds of creative content. OpenAI is committed to deploying these capabilities in a safe and responsible way.
It then provides us the list of bullet points for a non-technical audience. It includes points such as: ChatGPT can now be used to have voice conversations; ChatGPT can now show images; OpenAI is committed to deploying these capabilities in a safe and responsible way; and there are also some potential risks associated with ChatGPT's new capabilities, such as the possibility of being used to create deepfakes or to spread misinformation. Overall, I'd say Bard's summary was easy to read and to the point.
Now let's take a look at Bing AI and see what it generates with this prompt. All right, I'm back here in Bing AI, and first things first, I'll paste my prompt and hit Enter to submit it to the chatbot. Bing AI shows it's searching for the link I've provided, and it now starts to generate its reply. It looks like Bing has decided to start with the summary, and it is now moving on to the bullet list for key takeaways. All right, Bing AI is finished, so let's scroll up and have a look at the results.
Bing's summary reads: this blog post by OpenAI announces the introduction of new voice and image capabilities in ChatGPT, which is a generative AI chatbot. The new features allow users to have a voice conversation or show ChatGPT what they're talking about, making the interface more intuitive. The voice capability is powered by a new text-to-speech model that can generate human-like audio from text and a few seconds of sample speech. The company collaborated with professional voice actors to create five different voices that can be used to animate the chatbot. The image feature allows users to show one or more images to ChatGPT, which can be used to troubleshoot problems, explore the contents of your fridge, or analyze complex graphs for work-related data. The new features are being rolled out to Plus and Enterprise users over the next two weeks. Voice is coming to iOS and Android, opt-in in your settings, and images will be available on all platforms.
Bing AI then provides six bullet points from the article, such as: the new features allow users to have a voice conversation or show ChatGPT what they're talking about, and the company collaborated with professional voice actors to create five different voices that can be used to animate the chatbot. The bullet points seem to repeat the summary almost word for word, but that's all right, just something I noticed. But let's move on to our last chatbot for this test, Claude.
And I'm back here in claude.ai, ready to paste our prompt and see how Claude does with this challenge. Claude is currently unable to browse the web, but let's see what happens when I feed it the link. It looks like Claude has begun trying to answer my prompt, but if you look at my original message, you can see Anthropic has added a notification saying Claude can't visit links and responses may contain hallucinations. I'm interested in seeing where this answer goes now. I can see where Claude is going off track with some of the responses. Let's scroll back up to start from the top, and I'll point a few mistakes out.
The first bullet point mentions ChatGPT can now ingest images and videos, but this feature is actually limited to images, at least for now. The fourth bullet point mentions unsupervised learning techniques, which were never mentioned in the original article. And the last bullet point mentions Anthropic focused on safety and ethics while developing these new capabilities. While that was their focus when creating Claude, this update was to ChatGPT by OpenAI; Anthropic had nothing to do with it and was never mentioned. But, okay, I kind of figured this would happen, so I have a backup plan.
Let's go to OpenAI's website and copy the blog post. We'll then bring it back and paste it inside of Claude's prompt input. Remember, Claude can handle much larger prompts than most chatbots, so it shouldn't have any issues processing an entire blog post and generating a response. All right, let's submit our new prompt with the blog post, and we'll see how Claude responds. Claude seems to be handling the request much better this time around and seems to be staying on track from what I can tell. And that's part of working with these AI chatbots: knowing their limitations and strengths, and knowing what situations call for which chatbot or specific model. But it looks like Claude has finished generating its response, so let's check it out.
Claude started with the key takeaways, such as: ChatGPT now has voice and image capabilities, allowing more intuitive conversations by speaking or showing images. It also mentions users can have back-and-forth voice conversations powered by new text-to-speech and speech recognition models. But that's not all.
Claude also produced our one-paragraph summary, which reads: the blog post announces that ChatGPT now has voice and image capabilities, enabling more natural conversations by speaking or showing images to the AI assistant. It provides details on the new voice chat and image analysis features, which are powered by cutting-edge AI models. The post explains OpenAI's gradual rollout approach to refine safety practices before a full launch of these capabilities. It also outlines measures taken to mitigate potential risks, such as impersonation, inaccuracies, and privacy concerns related to analyzing images with people.
I personally feel this was one of the best summaries generated, next to ChatGPT's summary, and that Claude did a fantastic job once we copied and pasted the blog post. Using a chatbot to summarize articles and media is a great way to save time and become more efficient. However, as always, be sure to double-check your AI assistant's work. But that's it for test number two. Let's head back to ChatGPT and start our third test.