My experience with Google Gemini

I have spent the last two months playing around with Gemini Ultra 1.0. Overall, I am quite happy with it. For comparison, I have spent a similar amount of time using ChatGPT (GPT-4). I have also tried Llama 2 and Mistral 7B.

Here are my initial thoughts:

1) Gemini has a much more tolerable output length than ChatGPT. Even when I ask OpenAI’s model to talk less, it doesn’t seem to work. Gemini, on the other hand, hits the sweet spot: informative without being too verbose.

2) Gemini’s UI is much smoother than ChatGPT’s. This part is difficult to nail. You don’t want the user to notice too much latency, but you also don’t want a jarring token-by-token animation. ChatGPT displays one token at a time, whereas Gemini spends a couple of seconds gathering tokens, then appears to fade in one word at a time.

OpenAI’s “writing” animation.
Gemini’s “writing” animation.

3) I would be remiss if I didn’t mention “wokeness.” The latest scandal to hit Gemini has been an issue with its image-generation capabilities. For example, when asked to create images of Nazis, it showed Korean women in Nazi uniforms. This is obviously problematic, but it can be remedied and does not change my optimistic outlook on Gemini.

I have personally run into some issues where it censors itself. However, the issues are few and far between, and can be fixed by rewording the prompt.

Gemini censors itself when asked about the Sex column of a dataframe.
The issue is easily resolved by using the word “gender” instead.
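If the dataframe itself is being pasted into the chat, renaming the column ahead of time works just as well. A minimal sketch, with made-up data purely for illustration:

```python
import pandas as pd

# Made-up data purely for illustration.
df = pd.DataFrame({"Sex": ["M", "F", "F"], "Age": [34, 29, 41]})

# Renaming the column before pasting the frame into the prompt
# keeps Gemini from tripping over the word "Sex".
df = df.rename(columns={"Sex": "Gender"})
print(df.head())
```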

4) As a coding assistant, I have found Gemini to be powerful, but slightly less accurate than GitHub Copilot. Recently, I asked a simple question: “What does the sort parameter do when concatenating two dataframes?”

Gemini mistakenly claimed that the dataframes are sorted by index. After reviewing the documentation, however, it is clear that the non-concatenation axis is what gets sorted. (E.g., if you concat by row, the columns will be sorted alphabetically.)

Incorrect output. Who knows where that came from.

Interestingly, I ran the code and it gave me the correct result. It makes sense that Gemini would not execute code for every query, but it is amusing that it had the answer right in front of it and failed to notice.

Correct output, after executing the code.
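For anyone who wants to reproduce the behavior, here is a minimal sketch (the frames and column names are made up, but the point stands): with a row-wise concat, sort controls whether the columns get sorted, while the row index is left alone.

```python
import pandas as pd

# Toy frames with overlapping, out-of-order columns.
df1 = pd.DataFrame({"b": [1, 2], "a": [3, 4]})
df2 = pd.DataFrame({"c": [5, 6], "a": [7, 8]})

# Concatenating by row: sort applies to the non-concatenation axis (the columns).
print(list(pd.concat([df1, df2], sort=False).columns))  # ['b', 'a', 'c']
print(list(pd.concat([df1, df2], sort=True).columns))   # ['a', 'b', 'c']

# The row index is untouched either way: [0, 1, 0, 1].
print(list(pd.concat([df1, df2], sort=True).index))
```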

Overall, I am quite happy with Gemini, and I am bullish on Google’s AI push. It should be interesting to see how the next generation of Gemini Ultra competes with GPT-5.

