
Same here. Also, since I cannot see anything related to 32k on my API console, does anyone know if the price is the same for gpt4 vs gpt4-32k? In other words, do I use gpt4 for smaller context calls and only use gpt4-32k for longer ones or can I just switch to gpt4-32k for all calls?


The pricing is at https://openai.com/pricing. GPT-4-32K is twice as expensive for all requests, so for <8K contexts you're better off using GPT-4 :)

And at $0.06/1k input tokens and $0.12/1k output tokens, the price per request can get silly: 31k of context with 1k of output costs (31 * $0.06 + 1 * $0.12) = $1.98 (for a single request).
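A quick sketch of that arithmetic, using the GPT-4-32K rates quoted above (the function name and defaults are just for illustration):

```python
def request_cost(input_tokens, output_tokens,
                 in_rate=0.06, out_rate=0.12):
    """Cost in dollars for one request at per-1k-token rates.

    Defaults are the GPT-4-32K rates: $0.06/1k input, $0.12/1k output.
    """
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

# The example from the comment: 31k context + 1k output
print(request_cost(31_000, 1_000))  # 1.98
```

Run the same function with GPT-3.5 rates and you can see why the bill difference matters at volume.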


Yeah, the price is ridiculous. 3.5 is basically too cheap to meter, while this can run up a serious bill in minutes. Meanwhile the website is a really awful way to interact with GPT, so I just stick with 3.5. It works alright for my use cases. Not amazing, but acceptable.


How is it better through the API?


I can use my own client. I don't have to log back in every day or two. I can set custom pre-prompts (system messages) and the temperature. It doesn't constantly switch from gpt-4 to gpt-3 at the top.



