Created: 2023-05-18 00:09
Since I started using ChatGPT about two and a half months ago, I’ve noticed how I’m gradually shifting from reaching for Google to reaching for ChatGPT whenever I’m looking for explanations.
Anything I would’ve googled as a question before, for example “what type of soap can I use to wash a cast iron skillet?” or “what are async traits in Rust?”, now goes to ChatGPT first.
I know broadly how LLMs work: they’re a set of formulas that calculate what the next word should be based on the prompt, thus producing a string of words that sounds correct, even though it might not be.
They do not contain a database to reference; everything is baked into the formulas. That is why ChatGPT makes stuff up all the time, especially when backed into a “corner”.
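To make that concrete, here is a toy sketch of that next-word loop. It is purely illustrative, not any real model: the vocabulary, the scoring function, and the greedy choice are all placeholder assumptions standing in for a trained neural network.

```python
import random

# Tiny made-up vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["cast", "iron", "soap", "skillet", "rust", "async", "traits", "."]

def next_token_scores(context: list[str]) -> list[float]:
    """Stand-in for the model's learned weights. In a real LLM this is a
    huge neural network scoring every token given the context; here it is
    just random numbers, to show the shape of the loop."""
    return [random.random() for _ in VOCAB]

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        scores = next_token_scores(tokens)
        # Pick the highest-scoring token and append it. There is no database
        # lookup anywhere: the output is whatever the formulas score highest,
        # plausible-sounding or not.
        best = VOCAB[scores.index(max(scores))]
        tokens.append(best)
        if best == ".":
            break
    return tokens

print(" ".join(generate(["what", "soap"])))
```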
Knowing that, why do I still prefer asking ChatGPT over Google?
- Paradox of Choice => Google gives you the “raw” information, which you still have to process yourself. And it’s a ton of information, even though nobody ever looks at the second page (Putin probably hides his nuclear codes there). Given the sheer amount of choice when googling, that is not always what we want.
- Human-ness => the notion that you are talking with “someone”, or at least something, has a different feel than scanning a list of results
- Tailored responses => a single response, custom-made for the particular prompt. Even when it’s not accurate or correct, it is relatively easy to verify with a Google search[^1]
- Natural language comprehension => it feels more natural to ask ChatGPT a question than to convert it into a googleable query
[^1]: Most people aren’t fact-checking their LLM responses, so this is a big problem: people will consume them as if they were true/correct, pretty much like they do with the rest of the internet and legacy media today.