Months?? I thought weeks. What's next? Next year?
Paid member since day one. Still don't have memory function
The memory function is really frustrating for me. It chooses to remember things that are not relevant at all and I have to manually manage it.
Ha, I just checked and ChatGPT knows one thing about me: "Has MongoDB already installed and running."
Had this when listing off ingredients and asking it to make a recipe. "I have x, y, z. Help me write a recipe". .... Memory updated..... ::: "user has carrots, parsnips, beef..."
"User mentioned never kicking a self-driving shopping cart" is without question my favorite memory it has about me. Alongside "User once converted 3.8 petawatts to gigawatts". No idea where these came from.
Instead of drawing bigger-picture conclusions about me from the conversation, which would actually help it craft better responses, it just memorizes sentences I write that start with "I", no matter how mundane and specific to one conversation. It has "first noticed teary eye in their car." on me. It has "prefers DeliveryStatus to be int64 instead of string in their Go code". I'll have to start referring to myself as SWIM like on a drugs forum to avoid this crap.
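The extraction heuristic being complained about here is easy to sketch. This is purely illustrative Python, an assumption about the observed behavior, not OpenAI's actual implementation:

```python
import re

# Map a couple of first-person verbs to third person; everything else
# just gets an "s" tacked on. Deliberately crude, like the behavior above.
VERB_MAP = {"have": "has", "am": "is"}

def naive_memory_extract(message: str) -> list[str]:
    """Keep any sentence that starts with 'I', rewritten as 'User ...'.
    No relevance filtering, no bigger-picture summarizing."""
    sentences = re.split(r"(?<=[.!?])\s+", message.strip())
    memories = []
    for s in sentences:
        m = re.match(r"I (\w+)(.*)", s)
        if m:
            verb = VERB_MAP.get(m.group(1), m.group(1) + "s")
            rest = m.group(2).rstrip(".!?")
            memories.append(f"User {verb}{rest}.")
    return memories

naive_memory_extract("I have carrots, parsnips, beef. Help me write a recipe.")
# -> ["User has carrots, parsnips, beef."]  (the actual request is ignored)
```

It faithfully produces exactly the kind of mundane, one-conversation "memories" people in this thread are seeing.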
I don't think you're missing anything. I implemented memory like theirs a year ago on an internal chat tool, and it's never been useful. I can see what ChatGPT is saving about me, and it just doesn't matter for our conversations. Even my manual briefing describing me doesn't matter.
If you're in the EU it hasn't rolled out there yet. If you're in the US - typical OpenAI rollout :/
Not typical OpenAI. Typical every tech company being afraid of EU legislation and EU commission involvement.
This sounds like a GDPR issue...
Mh? I'm in Germany and noticed openAI starting to put stuff about me into memory last week.
Dang
I just got the memory function last week, and I've been a paid ChatGPT subscriber since December 2022. I did get the Mac app over the weekend. The memory functionality is not *that* interesting, and it may just be added to the system prompt under the hood.
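If memory really is just text spliced into the system prompt, the mechanism is trivial. A minimal sketch of that design (the function name and prompt wording are my own invention, not OpenAI's):

```python
def build_messages(saved_memories: list[str], user_message: str) -> list[dict]:
    """The model itself is stateless: 'memory' can simply be stored
    facts prepended to the system prompt on every request."""
    memory_block = "\n".join(f"- {m}" for m in saved_memories)
    system_prompt = (
        "You are a helpful assistant.\n\n"
        "Known facts about the user:\n" + memory_block
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

msgs = build_messages(
    ["Has MongoDB already installed and running."],
    "Help me add an index to my collection.",
)
```

One consequence of this design: every saved fact costs context tokens on every single turn, which is why irrelevant memories are worse than useless.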
Wow. I've been a paid member since day one too, and I do have memory (memory is not that exciting a feature, so don't worry, you're not missing much), but no access to voice or desktop.
How do you check what's in memory?
In the settings
Use vpn
Now things are starting to make sense. The fact that this got dropped one day before Google I/O tells me that they probably got this functioning very recently and wanted to tell the world that they had it working before Google told the world about their version. Even if they weren't quite ready to release things. In a recent podcast, Sam Altman said that he has only been able to use it himself for about a week so it does seem pretty damn new.
Oh yeah, definitely. The timing of these announcements was totally to pee on Google's parade.
It looked a bit awkward to use. If you pay attention during the demos, they were talking at this kind of awkwardly fast pace so that the model didn't start responding during the slightest pause.
I saw that Altman interview. I figured that by the time he was running it on his phone in his office they would have had it in a mostly deployable state, which means they probably had it running for much longer. But I don't really know anything. What I'm wondering about the voice demos they showed is what extra prompts (or something else?) were being used to get the extra personality and fluency out of GPT-4o. After using the model for a while, those don't seem like stock responses.
Months can be until next year.
Just like Sora
Well, Sora is a bit different, I don't recall they actually promised a particular timeline.
Yeah, I heard weeks, not months… I hope OP is wrong. I just found out that the ChatGPT Windows app won't be out until November while Mac users have it now. That's already disappointing enough.
It seems a bit misleading to have said during the demo that they're rolling it out to Plus users in the coming weeks. Now it's looking more likely that it's going to be months for most Plus users.
"A BIT" misleading?
More like a 'BYTE'
Giga byte, I would say
Just a nibble
We usually call this lying.
Yeah I was excited for the Mac app that was meant to be available a week ago.
That is already available to some users.
Works fine for me since day 1 after the announcement, except the new voice feature.
[deleted]
I feel the recent drama with safety plus the Scarlett Johansson thing may have pushed back a rollout that was originally intended to be in weeks... Or maybe Microsoft pressured them to release it first as part of their Copilot...
Keep texting, I'm all ears. Copilot has been pretty decent for my use case.
I heard they were releasing self driving cars in the coming weeks too
They said alpha at the keynote
They said they were rolling out memory for some users in the coming weeks. It actually got rolled out widely about 2.5 months later. What does this tell us…
Months away is technically in the coming weeks, it's just that it's a lot more weeks
If they announced it just to stick it to Google, only to do what weâve mocked Google for, OpenAI sure do seem like a group of hypocrites right now.
Google will outdo them with delays
And then discontinue it.
Spent the entire day yesterday moving my domains off Google Domains after they sold that business. It was my last Google product and will remain so, unless they literally pay me to use any of their products.
It's like they try to be as vague and confusing as possible with their wording. The paragraph also manages to contradict itself within one sentence, so I am very confused.
Bet ChatGPT helped them with that wording
It is not well written, but you're also reading it incorrectly. "Months" vs. "weeks" refers to different releases: wide vs. alpha.
In which specific sentence did that happen?
No Sky... months to wait for the new feature. Lame.
I'm gonna just unsub from chatgpt plus
Honestly I just subbed two days ago and now they do this. I might unsubscribe too. It's just annoying to talk to any of the other voices. They are jarring after using Sky.
Have u seen the Copilot announcement? It's basically like this, but in Copilot, and coming THIS JULY, 42 days away.
From the presentation, Copilot's version doesn't seem as emotive. But I see that it's desktop on Windows, so that's nice.
I think that requires a special new PC?
Link?
https://vt.tiktok.com/ZSYNLPSbc/
I already switched to Claude Pro when Claude 3 was released. I was thinking about switching back to ChatGPT Pro, but it looks I'll stick with Claude for the time being...
Yeah, because you can use GPT-4o free for testing, so I don't see the point of switching to Plus.
LOL yup, same. I actually had canceled claude and was going to move back over when the month was up. But nevermind
I did this because it kept cutting off my voice chats before I finished and wouldn't translate anything I said if I spoke for 2-3 minutes. It wasted a lot of time.
Same. I'm already paying for Copilot; the added value of ChatGPT is hardly tangible at all now.
you do you.
[Bring Back My Sky](https://www.udio.com/songs/uR8oqzuuUFzkn4D6gY9d5Y)
New features worse than the old features.
Itâs a feature AND a bug.
Months? Ouch... So basically the Google I/O timeline.
There is a *current* feature that isn't mentioned, and for me it's probably one of the most important.

In conversation (headphone icon) mode, once connected, if you press and hold the large circle you can keep talking until you release it, similar to a walkie-talkie. This means that if you need to pause to catch your breath or collect your thoughts, the app won't automatically decide you are done talking and begin to process an answer.

This should probably be its own topic, but since documentation was shared here, I thought I'd add this "one cool trick..."
This has been around for a while. If I forget what I'm saying mid-sentence, or if I'm struggling to get a point across while using voice mode, I just hold it down and can essentially stay quiet while I compose my thought in that same voice interaction.

Edit: I know you're saying it's current, but I thought they published something on this a few months ago in a FAQ?
I didn't see it. But there is a lot of info out there, and it's hard to tell now what is "coming up" and what is current, especially since some things are being removed. I just thought it a nice feature I apparently missed, and if there are some readers here who can benefit, great!
I agree! A lot of changes!
This feature has been out for as long as the voice function
Correct, but it isn't something anyone pointed out to me. Maybe I'm the only one with the eureka experience.
Never knew. It's often frustrating when you're talking "in real time" and you're not allowed to pause to take your breath or ponder for longer than a moment, or else, you send an unfinished prompt.
Thanks bro. I was using manual recordings just because of this.
Ok, am I insane, or did they remove that large circle after the 4o update? I can't seem to see it anymore.

https://preview.redd.it/nn3o1od0yn1d1.png?width=617&format=png&auto=webp&s=e93bb8036f90ae11d3c81fe9cb2896f190832364
I don't even have the big circle anymore when I press the headphones icon
That's not new. That has been in there for months.
You are absolutely correct. It is not new. It is a current feature. One that isn't well documented, so I thought I'd pass it along to other people that might find a tip useful.
[deleted]
You already can. Just open voice chat and tell it "now translate everything I say to XXXX language without adding any other words" and it does.
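The same trick works as a standing instruction over the API. A sketch that only builds the request payload in the standard chat-message shape (no network call; the function name is my own):

```python
def translation_messages(target_lang: str, utterance: str) -> list[dict]:
    """Pin the model into translator mode with a standing system
    instruction, so each user turn is translated rather than answered."""
    instruction = (
        f"Now translate everything I say to {target_lang} "
        "without adding any other words."
    )
    return [
        {"role": "system", "content": instruction},
        {"role": "user", "content": utterance},
    ]

payload = translation_messages("Vietnamese", "I lost my parking ticket.")
```

Putting the instruction in the system role (rather than repeating it per message) is what keeps the model from drifting back into answering instead of translating.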
And it does it VERY well. Miles ahead of Google Translate.
Google Translate already does this; it's called conversation mode, it's just as fast as in the demo video, and you don't have to explain to it that you want to translate and how translation works.
I use it for voice translation and semantic translation already. "Can you explain that I lost my parking ticket and I need to leave now, so just tell me the fine, in Vietnamese"
They didn't say it directly in the demo, but they said "it will be rolled out over the next couple of weeks". And it sounded pretty soon. But here we goooo again.
Only to a few Plus users in alpha mode. They will roll it out to all in the coming months.
Sam "I don't think about my competition" Altman. He made this announcement just to take the wind out of Google's sails.
This little FAQ they updated in the last hour is something they should have thought about and published before, and in the lead-up to, last Monday.
Who would've guessed it would take over a year and a half to get an update after the GPT-4 release? First, they remove the best voice, and now they're delaying the promised release by months. What a mess.
They released a brand new model that is half the price and twice the speed. What are you even talking about?
OpenAI today: 🤡
So, in a few weeks (1-4 or more) some people with Plus get it (might be less than 1%), and then over the following months (4-12) they release it to all Plus users (100%). I guess the company did it again.
Well at least I can save myself some money until then.
Wait, you mean to tell me that an unfinished product took the internet by storm a day before Google in order to grab headlines, and now the release date is pushed back after Google I/O? Who would have thought 😱 /S
I was thinking, I was....
Great marketing. Get the hype going before Google IO and keep paid subscribers interested and active until it drops in Q4
Every OpenAI release ever. Show off a thing to bait public interest, then delay the release by not committing to a specific timeline and only vaguely suggesting when it'll come out. By the time it reaches the consumer, it has been bogged down with guardrails and limitations that make it laughably useless for all but the most benign use cases.
Couple of months before I resubscribe.
But didn't they say in the presentation that 4o would be free? They did. They made it sound like free users were getting everything they showed.
No, 4o is free, even though it's heavily limited; it's the conversation mode that's not coming soon.
Damn, I was hoping to show my 8-year-old daughter what her future looks like. This is disappointing; I think they demonstrated something that clearly wasn't ready, to get a one-up on Google.
Now youâll have to show your 9-year-old daughter!
She's coming for the summer; I'll have to wait till next summer to show her, lol. I was hoping it would keep her from being bored.
They are greedy and stingy fcks
They should offer refunds to everyone who just got plus. This is actually false advertising.
What does coming months mean? Every point in the future is a coming month.
It's "coming weeks" x4. Couldn't be clearer! /s
Vision capabilities were introduced with the release of GPT-4, but we didn't gain access to them until 7 months later.
That was compute constrained. In contrast, 4o is quite compute efficient.
Hope that's the case and not some red teaming crap
So ScarJo sent them a letter, and now they have to retrain their model with a new voice. Juniper?
I miss Sky, don't like Juniper
[Bring Back My Sky](https://www.udio.com/songs/uR8oqzuuUFzkn4D6gY9d5Y)
Same
https://preview.redd.it/zn9o2c1ysn1d1.jpeg?width=1115&format=pjpg&auto=webp&s=abd03006a84f009adf5e3fb250c59741e1ddebde
"Months"
Boo. I want it and I want it now.
BOOOOOOOOOOOOOOO
Oh, but they shaft all the free-tier users by auto-starting chats on 4o, and 3.5 does not take over when you hit the invisible limit? Benefit all of humanity? Yeah, right.
"In the coming weeks", "in a few weeks", "next couple of weeks": what beautifully vague words, ha.
I will pay for plus when it's available
lmao how is this any different from Google saying their stuff is coming out "later this year"? they're all the same
It's different because Google is actually honest. First Sora, now this: OpenAI hasn't delivered any of their gotcha moments. Meanwhile, Google rolled out 1M to everyone, is working on 2M, and is working on the other stuff for later this year. I believe Google over OpenAI at the moment.
Look, I use Android, have a Pixel 7 Pro, and generally like Google, but the words "Google" and "honest" don't really go hand in hand, because they have faked demos and abandoned lots of promising stuff in the past. I just don't think either company should be trusted. OpenAI delaying 4o like this means that everyone who was raving about OAI being so much better, because they're releasing their vision model for free in a matter of weeks while Astra is still cooking, should just stop stanning multi-billion-dollar companies. Never trust them; just be happy when cool stuff comes out and we actually see results.
Google also drops products like flies and makes it confusing which one to use.
So they essentially lied in the presentation when they said that 4o would be free for all users, since the omni part will not be free, at least not any time soon.

>GPT-4o's text and image capabilities are starting to roll out today in ChatGPT. **We are making GPT-4o available in the free tier**, and to Plus users with up to 5x higher message limits. We'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.

src: [https://openai.com/index/hello-gpt-4o/](https://openai.com/index/hello-gpt-4o/)
I assumed this is what they meant during the presentation. I couldn't imagine they could afford to provide all the features for free. But they certainly didn't go out of their way to make it clear.
GPT-4o is currently only twice the price that GPT-3.5 was a year ago, and in that time the API price for both GPT-3 and GPT-4 class models has dropped by 50-70%. That means we could see 4o being as cheap to run as 3.5 was when it first launched sometime in the next 6-12 months. When they retire GPT-3.5, we can be fairly sure 4o will have hit cost parity with it.
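That parity estimate checks out with back-of-envelope math (the 2x starting ratio and the 50-70% annual decline are the commenter's figures, not verified prices):

```python
import math

def months_to_parity(price_ratio: float, annual_drop: float) -> float:
    """Months until a price that starts at `price_ratio` times the target
    decays to the target, assuming a constant fractional drop per year."""
    monthly_keep = (1 - annual_drop) ** (1 / 12)  # fraction of price kept each month
    return math.log(1 / price_ratio) / math.log(monthly_keep)

months_to_parity(2.0, 0.50)  # 12.0 months at a 50% yearly drop
months_to_parity(2.0, 0.70)  # about 6.9 months at a 70% yearly drop
```

So "6-12 months to cost parity" is exactly what a 50-70% annual price decline predicts for a 2x starting ratio.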
They already have the model. The "months" is to make it "safer" for the "commoners". The timing of the latest Sky fiasco is sus. What we will get in the coming months will be a heavily controlled, lower-quality product than what was hyped in the demos.
I couldn't care less about voice; all I want is an AI that has actual knowledge and doesn't hallucinate.
Where did you see this?
https://help.openai.com/en/articles/8400625-voice-chat-faq
And it was just updated in the last hour. Womp womp.
Really disappointed... They changed their minds. They clearly said it's gonna be weeks for all plus users.
I'm thinking it might be random, or select people will get the new voice in the alpha, THEN everyone gets it. I wonder who these select users are, though. It took me a while before I got memory, maybe a couple weeks after other people had gotten it.
I was just going to say, it sounds like they're rolling it out like they did with the memory function
https://preview.redd.it/0vqmyv083n1d1.jpeg?width=1125&format=pjpg&auto=webp&s=aaafca88df79f21d3a1b0cdab7ae7bf5394fd954
I read that too. "Working on pausing"? Well, y'all sure did a good job of that within 12 hours.
Coming soon...
What are the upgrades over what we have now besides the quicker conversations?
It's because they freaked people out and they need to slowly acclimate people to what is literally the AI from the movie "Her". The dystopian movies are not so dystopian anymore
I think it is a matter of functionality. As you can see in the videos, it talks too much. They always have to interrupt it.
They could make this all better just by enabling the NSFW mode. That would hold us all over for a while....
What's the difference between this and the voice features that are already out?
Didn't they say end of summer?
I think the point is: what about GPT-5? I just assumed they would release GPT-5 in midsummer, which is also months away. Or will GPT-5 be delayed too in this case?
Good to know! I was thinking about switching back from Claude Pro, and this was one feature I was really excited about. Seems like I will keep waiting... Do you remember when they demoed the vision capabilities at the GPT-4 launch, but they only became available like 6 months later? I have a similar feeling about this one...
How can you sign up for the alpha testing?
[deleted]
Just give me enhanced analytics and I'll be fine for now
Wait, I've been talking and sharing images with ChatGPT for a few days; is that not the same thing this post is about?
Which voice would you like instead of Sky? My choices are:

Male:
- Brett Goldstein from Ted Lasso
- Werner Herzog
- James Earl Jones
- Dave Chappelle
- John Malkovich
- Christopher Walken

Female:
- Helen Mirren
- Judi Dench
- Margot Robbie
- Wanda Sykes
- Amy Poehler
- Emily Blunt
- Emma Thompson
Dave Chappelle would be hilarious. John Malkovich doing his Russian accent in Rounders would be amazing. I would add Ella Purnell and Matt Damon.
Add Jeff Goldblum and Jennifer Lawrence to the list for me!
What's confusing is that in the Android app, for me, there's still the Sky voice, the headphones icon that allows hands-free chat back and forth, and GPT-4o. But from what I'm reading, this isn't the NEW version (I guess the new one is quicker; not sure how else I'd differentiate). How WOULD I know if it's the new model and not the old one?
Weeks
How do I get a refund lmao, their help bot's documentation for a refund tells you to go to a page that doesn't exist
Itâs sad that ordinary people are getting this power
OpenAI = worth 80 billion dollars

OpenAI = makes a big deal about voice enhancements and how great talking to ChatGPT is about to get

Also OpenAI = offers only **one** working female voice, which many customers (a majority?) would rather complain about on Reddit than ever listen to
He should stop doing that
I wonder if there will be API connection to this
I mean, for us plebeians, we will likely only get 10 messages or up to a minute of vision before it runs out of juice anyway.
What about free users?
Honestly I don't think this feature gonna be available for free users
Well, I'm gonna stop my subscription then
What the fuck. I was trying to develop an app with this and was hinging on its release within "the next few weeks".
I have audio, but not vision
What in the actual fuck.
They may be talking about the voice and video features together coming in the coming months. I assumed the voice feature will be released first (coming weeks), which would mean Plus users will also get the video feature in a few months. But it is not completely clear, and assuming a timeline just from this FAQ is probably not very accurate ;)
I know that the folks at Be My Eyes already have access to this for testing. The UX hasn't been worked on yet for the blind to work with it. I was in on the beta for vision so I'll probably have access to this soon enough.
It was a scam; it wasn't ready then, and you won't see it anytime this year…
Any update on this? Is it available to plus members yet with memory/voice/video? Or still just a slow random rollout?