I pitted Google Bard with Gemini Pro vs ChatGPT — here’s the winner

Dec 10, 2023
Unfortunately not yet. I've written several stories on Gemini, and the version included with Bard TODAY is Gemini Pro, which Google says is roughly equivalent to GPT-3.5. This is the year-old model that powers the free version of ChatGPT.

Next year Google is launching Bard Advanced, which is built using Gemini Ultra. This is the model that Google claims is as good as, if not better than, GPT-4.

So right now, Bard with Gemini Pro is roughly equal to ChatGPT free.
Actually, Google did merge parts of Gemini with the free version of Bard on 12/06/2023. I noticed it right away when Bard suddenly became more "academic" and less "conversational" than the day before. See this TechCrunch article for details of the upgrade.

https://techcrunch.com/2023/12/06/googles-ai-chatbot-bard-gets-a-big-upgrade-with-gemini-googles-next-gen-ai-model/#:~:text=Gemini Pro will first power,by Gemini's most capable model.
 

RyanMorrison

Yes.
Gemini comes in three forms: Ultra, Pro, and Nano.
Ultra is equivalent to GPT-4, Pro to GPT-3.5, and Nano is a smaller model that runs locally on Android.

It was Pro that Google integrated with Bard.

See my reporting here:
https://www.tomsguide.com/features/i-test-ai-for-a-living-heres-why-google-gemini-is-a-big-deal
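Once Google opens up developer access to Gemini Pro, a minimal sketch with the google-generativeai Python SDK should look roughly like this (the key handling and prompt below are illustrative, not from the article):

```python
# Minimal sketch: calling the Gemini Pro model directly via Google's SDK.
# Assumes `pip install google-generativeai` and an API key from Google AI Studio.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-pro")  # the tier that now powers Bard
response = model.generate_content(
    "Summarize the difference between Gemini Ultra, Pro, and Nano in two sentences."
)
print(response.text)
```

Ultra isn't exposed to developers yet, which matches the Bard Advanced timeline above.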
 
Dec 12, 2023
I responded truthfully to this post and failed to adequately censor the name of the programming language the author decided was an appropriate judge of coding skill. My bad for not censoring the name of the language you decided to use; here's the comment again with eight letters replaced by BLEEP so it passes your censorship.

Just imagine choosing to use a vulgarly named programming language and then removing comments that simply use the correct name for the language. What a brave new world, amirite?

You tested code generation with brainBLEEP? Seriously? Do you even use brainBLEEP for any professional software project yourself (i.e., are you qualified to judge the output)?

I get Rust and Python code from Bard all the time. The three-drafts feature is a huge advantage.

This is not a charitable review of any of the AIs. Please repeat your investigation in a serious way, with the best possible prompt engineering. This post is lame clickbait humbug, not a sincere review of the SOTA models... I expect much better from Tom's!

Anyway, here's why ChatGPT and Claude are disqualified, and it's exponentially more serious than any performance/IQ issue:
Image: https://i.postimg.cc/MGqPvPz5/cartel-microsoft-microsoft-openai-microsoft-github-nvidia-cartel-customer-noncompete-clauses.png

Likewise, Anthropic's Claude is disqualified:
Image: https://i.postimg.cc/1tj10hts/anthropic-customer-noncompete-clause.png

Galaxy brain, I know. Anyway, Google's Generative AI Terms have nothing remotely like that, so Google Bard wins automatically regardless of any performance difference (which often tilts toward Bard anyway, especially if you review all the drafts).

Maybe this is the best sign AGI already happened: neither ChatGPT nor Claude would write terms like that:
Image: https://i.postimg.cc/L6hwkbK8/jurisprudence-gpt-thinks-openai-customer-noncompete-clause-is-wrong.png
Image: https://i.postimg.cc/VNNC5n0p/anthropic-claude-thinks-anthropic-customer-noncompete-clause-is-illegal.png
 

RyanMorrison

I left all the questions to Claude 2. It determined what to ask each of the two chatbots based on things AI models find difficult or non-standard.

This is also not a scientific analysis or benchmarking exercise, which is why I also made subjective decisions. In the end I still think Bard won, even with the subjective decisions removed.
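For anyone curious, the question-generation step looks roughly like this with Anthropic's Python SDK; the prompt wording and question count below are illustrative, not the exact ones I used:

```python
# Illustrative sketch: asking Claude 2 to propose chatbot test questions.
# Assumes `pip install anthropic` and an ANTHROPIC_API_KEY in the environment.
import os
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

request = (
    "Suggest five prompts for comparing two AI chatbots head to head. "
    "Focus on tasks language models usually find difficult or non-standard, "
    "such as multi-step reasoning, counting, and unusual formatting."
)

completion = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=500,
    prompt=f"{HUMAN_PROMPT} {request}{AI_PROMPT}",
)
print(completion.completion)  # Claude's suggested test prompts
```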
 
Dec 12, 2023
Ryan, it's a crucial topic, so I applaud you for making an initial effort, and I'm sorry to have been negative about it and to have brought up a bunch of legal stuff. That's life these days: the peanut gallery will always find something to pick on.

IMHO, AI evaluation is an art, and according to Pirsig it's impossible to define quality, so your post here is infinitely more than nothing. It's also quite comprehensive and diverse, because you tested many different areas of evaluation.

Thank you for thinking and writing about a head-to-head comparison of AIs!

P.S. https://en.wikipedia.org/wiki/Theory_of_multiple_intelligences
might be relevant to your valuable line of inquiry here

To your point about scientific analysis and benchmarking: that could become a valuable article series on Tom's Guide, especially for AI coding across the most popular languages (https://survey.stackoverflow.co/2023/#most-popular-technologies-language-prof). Some ideas would be to compare the completeness of the code the models write (placeholders are a major frustration for devs working with AI; https://chat.openai.com/g/g-3mjvrrXZ6-bug-fix-gpt is a GPT I made specifically to define placeholders and set a goal of reducing them, without any lies about tipping or kittens, mind you), their correctness, and their performance, and to count the number of messages needed to get working code, which is a measure of the "massive energy waste" problem.
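To make that concrete, here is a rough sketch of what a per-model scoring harness could look like in Python; the metric weights, placeholder patterns, and field names are my own illustrative assumptions, not an established benchmark:

```python
# Rough sketch of a per-model scoring harness for AI-generated code.
# Metrics follow the ideas above: completeness (few placeholders),
# correctness (tests pass), and efficiency (messages needed to get
# working code). All names, patterns, and weights are illustrative.
import re
from dataclasses import dataclass

PLACEHOLDER_PATTERNS = [
    r"\bTODO\b", r"\bFIXME\b", r"# *\.\.\.",
    r"your code here", r"implementation left as an exercise",
]

@dataclass
class CodeSample:
    model: str          # e.g. "Bard (Gemini Pro)" or "ChatGPT (GPT-3.5)"
    code: str           # the generated source
    tests_passed: int   # unit tests that passed
    tests_total: int    # unit tests run
    messages_used: int  # prompts needed before the code worked

def completeness(code: str) -> float:
    """1.0 means no placeholders; every placeholder hit lowers the score."""
    hits = sum(len(re.findall(p, code, re.IGNORECASE)) for p in PLACEHOLDER_PATTERNS)
    return 1.0 / (1.0 + hits)

def score(sample: CodeSample) -> float:
    correctness = sample.tests_passed / max(sample.tests_total, 1)
    efficiency = 1.0 / max(sample.messages_used, 1)  # fewer round-trips is better
    return round(0.5 * correctness + 0.3 * completeness(sample.code) + 0.2 * efficiency, 3)

if __name__ == "__main__":
    demo = CodeSample("Bard (Gemini Pro)", "def add(a, b):\n    return a + b\n", 3, 3, 1)
    print(demo.model, score(demo))  # -> Bard (Gemini Pro) 1.0
```

Run the same tasks and the same unit tests against each chatbot's output and the scores become directly comparable, CPU-review style.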

So, yeah, great idea! Think about it like reviewing a CPU or a GPU ... the more we can quantify quality, the easier it is to make informed decisions, and the more helpful the articles will become.

When the GPT store opens up, there will be (already is, really) a "Cambrian Explosion" of GPTs, and that's a good source of material because people will need help choosing from a billion options. Plus, Bard will almost surely wind up with a similar feature (hopefully soon, for reasons noted above)!

Happy Holidays,

Bion