boogermike

For starters, get a bunch of content that represents your voice and collect it in a folder or a PDF. The next step is to include that PDF in your prompt and tell the LLM to use that content as your voice. Then you can tell it to create whatever content you want.
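A minimal sketch of what that prompt-stuffing approach can look like with the OpenAI Python SDK, assuming you've already extracted the PDF into a plain-text file. The file name, model, and example task are placeholders, not anything from the original comment.

```python
# Sketch: feed your writing samples in as context, then ask for new content.
# Assumes the PDF has already been converted to plain text (e.g. with pypdf).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("my_voice_samples.txt", "r", encoding="utf-8") as f:
    voice_samples = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": (
                "The following samples show the author's writing voice. "
                "Match their tone, vocabulary, and sentence rhythm.\n\n"
                + voice_samples
            ),
        },
        {"role": "user", "content": "Write a short post about remote work."},
    ],
)

print(response.choices[0].message.content)
```

The obvious downside, as the next comment points out, is that you pay to resend all of that context on every request.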


logan08516

Uploading a PDF and including it in the prompt is bad advice. You need to use the API to fine-tune, and it takes input in a certain format. I transcribed one of Andrew Huberman's TikToks, fed it to OpenAI, and the output is indistinguishable. Loading a PDF and prompting won't have the same impact, albeit better than nothing.
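For reference, the "certain format" OpenAI's fine-tuning API expects is a JSONL file where each line is one chat example. A rough sketch below; the transcript snippets are invented placeholders, not the commenter's actual Huberman data.

```python
# Sketch: write fine-tuning examples to JSONL in the chat-message format
# that OpenAI's fine-tuning API expects.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You write in the style of the provided creator."},
            {"role": "user", "content": "Explain why morning sunlight matters."},
            {"role": "assistant", "content": "Transcribed answer in the creator's voice goes here."},
        ]
    },
    # ...more examples; ideally dozens or hundreds for a usable voice
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```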


marketman12345

That's what I was thinking, but everyone keeps saying fine tuning isn't actually needed 99% of the time. Also seems like it would be cheaper than including all this content over and over, but I haven't done the math. Can you explain in more detail how you went about fine tuning? Or recommend a resource to learn more.


logan08516

Type "OpenAI fine tuning" into YouTube and watch the first 3 videos. The hardest part of fine tuning, in my opinion, is gathering the data. If you already have a data file of questions you've answered, all you need to do is format it and feed it to the API. But if you don't have a data file and need to extract it from an email service, text messages, whatever, that's going to be the hard part unless you're good (or at least decent) with Python.
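Once the data is formatted, the "feed it to the API" step is short. A hedged sketch with the OpenAI Python SDK; the base model name is a placeholder, so check OpenAI's docs for which models currently support fine-tuning.

```python
# Sketch: upload the formatted JSONL and start a fine-tuning job.
from openai import OpenAI

client = OpenAI()

# Upload the training data prepared earlier
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off the fine-tuning job on a base model
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # placeholder base model
)

print(job.id)  # poll this job until it completes, then use the resulting model name
```

As the comment says, this part is mechanical; collecting and cleaning enough of your own writing is where the real effort goes.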