On Artificial (Intelligence)
#Intro
ChatGPT crossed the 100-million-user threshold a few weeks ago. That alone makes it a disruptive technology, even setting aside the fact that people, myself included, find it incredibly useful.
How do I use it, then? To generate frameworks for my everyday code (non-proprietary, of course) and to help me code faster. Usually I tell it what I'm looking for, and it spits out a barebones framework or set of functions, which I can then customize. This works (mostly), and saves me a lot of time I would otherwise spend figuring out "Oh wait, what's the JavaScript array method for x or y again?"
It doesn't work for everything, but it handles context incredibly well.
#Chatbot
The OpenAI API, on the other hand, doesn't handle context at all. I created two bots for use in our Linux IRC channels: legacy and wintermute. Legacy uses the curie model, and wintermute uses the latest davinci-003 model. I have found that legacy actually has opinions, while wintermute flounders about in "This isn't the right thing to do, you can't do that, it's not allowed! Wait a second, I am just a language model, I can't have opinions. It depends on the preferences of x and y, and also factor in z!" Meanwhile, legacy will tell you "I like apples."
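The bots' actual code isn't shown here, but the setup boils down to the two of them differing only in the `model` field of a legacy completions request. A minimal sketch of that idea, with the request bodies built as plain dicts and no network call (the parameter values and helper name are illustrative, not my bots' real code):

```python
# Sketch: both IRC bots send the same kind of request to OpenAI's
# legacy /v1/completions endpoint; only the "model" field differs.
# Payloads are assembled as plain dicts here, with no network call.

def build_completion_request(model: str, prompt: str) -> dict:
    """Assemble a request body for the legacy completions endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": 150,   # keep IRC replies short
        "temperature": 0.9,  # higher = more opinionated output
    }

# legacy runs on curie; wintermute on davinci-003
# ("text-davinci-003" is the API's full name for the model)
legacy_req = build_completion_request("curie", "Do you like apples?")
wintermute_req = build_completion_request("text-davinci-003", "Do you like apples?")
```

Everything else — the IRC plumbing, rate limiting, and so on — is identical between the two, which is what makes the difference in personality so striking.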
Re: the API and context, I have to submit the past two prompts and the past two responses for it to understand what the following prompt is about. This is a bit of a pain, but it works. To be entirely fair, it will always have the previous n-2 prompts to go from, which is enough for an IRC room.
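That n-2 trick is easy to sketch: keep only the last two prompt/response pairs and prepend them to each new prompt. The class and the `User:`/`Bot:` prompt format below are illustrative assumptions, not the bot's actual code:

```python
from collections import deque

# Sketch of the n-2 context trick: remember only the last two
# exchanges (prompt + response pairs) and prepend them to each
# new prompt before sending it to the API.

class ContextWindow:
    def __init__(self, max_exchanges: int = 2):
        # deque with maxlen drops the oldest exchange automatically
        self.history = deque(maxlen=max_exchanges)

    def add_exchange(self, prompt: str, response: str) -> None:
        self.history.append((prompt, response))

    def build_prompt(self, new_prompt: str) -> str:
        lines = []
        for past_prompt, past_response in self.history:
            lines.append(f"User: {past_prompt}")
            lines.append(f"Bot: {past_response}")
        lines.append(f"User: {new_prompt}")
        lines.append("Bot:")  # cue the model to complete the bot's turn
        return "\n".join(lines)

ctx = ContextWindow()
ctx.add_exchange("Do you like apples?", "I like apples.")
ctx.add_exchange("What about pears?", "Pears are fine too.")
ctx.add_exchange("And bananas?", "Not a fan.")  # the apples exchange falls off
full_prompt = ctx.build_prompt("Which fruit wins?")
```

Two exchanges is an arbitrary cutoff, but in a busy IRC room the topic rarely survives longer than that anyway.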
#Too Passive
This is where my problems with GPT models come from. To be clear, I don't care about plagiarism, nor do I care about it taking code from other FOSS projects on GitHub/Stack or whatnot. As long as users are the prime beneficiaries here, damn the licenses. We have been promised a voice assistant for decades now, and I'd rather this future come now, and drag the naysayers, reeeing and screeching, into it. (If only they had some idea of how much work it currently takes to make GPT models, or image models like SD, produce an actually good product.)
No, my problem with ChatGPT and GPT models so far is that these models are too passive. I understand that making them active would require almost nanny-state (*cough* the UK *cough*) levels of surveillance and monitoring of your everyday activities, but if there were a model I could self-host without it requiring nearly a quarter of a TB of RAM and four RTX 4090s, and it could give active recommendations, I would be all for it.
A few examples:
- While I'm having a conversation, I could just ask, "What do you think, AI?"
- Based on the sensors I've placed around the house (and oh boy, there are a lot when it comes to smart home and Zigbee sensors), it could tell me about issues and suggest alternatives/fixes.
- It could automagically build a schedule from my calendar entries and meetings, coordinating my work and personal calendars together.
- It could tell me what to eat based on my diet, and what I have in the fridge. (This was generated automagically by CoPilot, but I think this works lol - although inventorying it when we get groceries/take the food out would be a nightmare)
- Summarize my emails, filter out emails sent to me by marketers and updates, and give me a semi-editorialized version of them. "Dave wanted you to come over to sign the lease, when would be a good time?"
etc.
I'll undoubtedly have more thoughts on this in the following months.