Are you using it enough?
One of the things that has surprised me over the past couple of months is noticing that a lot of my friends don't seem to be using ChatGPT all that much. There's nothing wrong with that, but for a service that has dramatically changed my life, I find it interesting that other people haven't had a similar experience. If I were to hypothesize why, I think it has something to do with not really being sure where to start. Staring at a blank input box can be intimidating and isn't very conducive to thinking creatively, which is exactly what's needed to make the most of AI.
While I am not a machine learning expert, or someone who has spent their whole life dedicated to understanding the higher-order abstractions of a perceptron, I am someone who has spent a good amount of time hacking. Reasoning about how to hack into a system, or how to write the smallest amount of code to make something happen, has trained a part of my brain to always look for shortcuts. In my opinion, ChatGPT and the like have just opened the door to some of the most game-changing shortcuts for a lot of people, and not just people who know how to code.
I won’t make you an expert, but I will probably make you smarter
The most important thing I care about in writing here is showing you not just what is happening at the state of the art in AI, but also what work has stood the test of time in the field. A number of years ago I read an amazing blog post from Google titled "Rules of Machine Learning," and what I found most compelling about it was the very first rule:
Rule #1: Don’t be afraid to launch a product without machine learning.
Adding AI to an application is not going to solve your problems. Effectively using AI means having an intimate understanding of the data you're dealing with and the problem you're trying to solve. Google did not immediately respond by launching Bard to the general public because the problem they're trying to solve is search at a global scale, and the cost of AI hallucinations at that scale is just too high.
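To make that rule concrete, here's a minimal sketch of what "launching without machine learning" can look like, borrowing the app-ranking scenario the Google article uses as its example. The data and scoring rule here are hypothetical; the point is that a transparent heuristic gives you a shippable baseline, with no training data or model to maintain, that any future ML system has to beat.

```python
from dataclasses import dataclass


@dataclass
class App:
    name: str
    installs: int
    rating: float  # average star rating, 0-5


def rank_without_ml(apps: list[App]) -> list[App]:
    """Rule #1 in practice: rank with a simple heuristic instead of a model.

    Popular, well-rated apps float to the top. No feature pipeline,
    no training loop, nothing to monitor for drift.
    """
    return sorted(apps, key=lambda a: a.installs * a.rating, reverse=True)


if __name__ == "__main__":
    catalog = [
        App("FlashlightPro", installs=1_200_000, rating=3.1),
        App("BudgetBuddy", installs=80_000, rating=4.8),
        App("NoteTaker", installs=450_000, rating=4.2),
    ]
    for app in rank_without_ml(catalog):
        print(app.name)
```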
AI is not Flex Tape that you can slap on a problem. Importing LangChain into your app might make for a cool POC, but is that going to scale? Maybe it works really well today, but what happens in a year when calling the OpenAI API by itself is no longer enough?
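For reference, the "just using the OpenAI API" version of that POC is only a few lines. This is a sketch against the v1 OpenAI Python client; the model name is a placeholder, the summarization task is my own example, and it assumes an OPENAI_API_KEY is set in your environment.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(ticket_text: str) -> str:
    """A bare-bones POC call: no framework, just the chat completions endpoint."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; swap in whatever model you have access to
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize("The export button crashes the app when a report has more than 10,000 rows."))
```

The hard part is everything around this call: the data you feed it, how you evaluate the output, and what you do when the model is wrong.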
What I want to write about
Sam Altman made an interesting statement about ChatGPT to Lex Fridman:
I suspect too much of the processing power [of training GPT-4] is going into using the model as a database instead of using the model as a reasoning engine
I want to help people build “reasoning engines”. I want to be a reasoning engine to help transform the problems you have into actionable steps.
I have a laundry list of references that I've collected over the past few years, and I'm going to be digging through it to surface some insights I find interesting.
See you soon!
I want to release an article once every week. No more, no less. I would like to keep everything I write to no more than a 10-minute read, or "byte sized." I hope you find what I write interesting, and if there are other AI-related topics you would like to see, let me know!
You might be interested in https://llm-efficiency-challenge.github.io/challenge ! It encourages participants to train LLMs with a focus on improving reasoning, with the hypothesis that you can do this with minimal compute.