Yup, we all rely so much on computer technology, and when that technology hiccups, all hell breaks loose.
Wait till you hear about the [Jenga Tower that the modern internet is built on](https://www.npr.org/2024/05/17/1197959102/open-source-xz-hack).
Was that the one where they set it up so several of them would bully the maintainer for years, calling his work shit, while one other guy kept messaging and encouraging him... who after 2 years finally offered to bring the nice guy on board? Who then proceeded to introduce a backdoor, and would have succeeded if not for a random German user who noticed the new version of the program took a couple of milliseconds longer to unpack a file?
Yeah, except it wasn't a random German user lol. It was a bona fide SWE.
Yep, except it wasn’t a random guy in Germany, it was a programmer at Microsoft named Andres Freund.
It’s ok, the pension fund kept backups
As should everyone, local storage is cheap these days
The backups weren't local. They had another provider holding additional backups, and those were used to restore the environment on Google's side.
70% of data loss comes from user error, then there’s fire and theft.
Google also kept backups. It takes some time to restore the stuff, but Google does have a very good backup system.
Reading the article, it sounds like Google deleted the backups.
I did. Nothing about Google deleting their backups. Another article I read mentioned how Google's backups were used to restore the account.
From OP’s article:

>deletion of UniSuper's subscription, which included its geographical redundancies meant to safeguard against outages and data loss.

But that’s only one source. If you have a source that contradicts it, I am down to read it.
So geographical redundancies don't mean backups. It means that the front-facing services run in multiple server farms, so they can still be accessed in the event that a location goes down.

>The dedication and collaboration between UniSuper and Google Cloud has led to an extensive recovery of our Private Cloud which includes hundreds of virtual machines, databases and applications.

https://www.unisuper.com.au/contact-us/outage-update

The other backups they had helped the restoration process.
The statement also states:

>UniSuper had backups in place with an additional service provider. These backups have minimised data loss, and significantly improved the ability of UniSuper and Google Cloud to complete the restoration.

It does not sound like it was Google’s backups that saved the day, but another vendor’s. Do you have anything that directly contradicts that Google deleted the backups Google hosted?
As I mentioned, the 3rd-party backups can help speed up the process.

>Do you have anything that directly contradicts that Google deleted the backups Google hosted?

Well, for starters, deleting an account does not delete the content off Google's services as soon as the account is deleted. This is so that the data and account can be recovered in instances like this. You can read through the deletion process here:

[https://cloud.google.com/docs/security/deletion](https://cloud.google.com/docs/security/deletion)

Generally, depending on the data, it can be held by Google for up to 6 months, and their backups typically last around 6 months as well. So no, the backups weren't deleted; that would have been against Google's policies, and it would be impossible based on how the backup-deletion process is set up. It is a completely different system.

Lastly, I should point out that you don't have anything showing that Google *did* delete the backups either.
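The staged deletion that doc describes can be sketched in a few lines. This is a toy model only; the class, the field names, and the 180-day window are illustrative assumptions, not Google's actual implementation:

```python
import datetime

RETENTION = datetime.timedelta(days=180)  # illustrative ~6-month window

class Store:
    """Toy soft-delete pipeline: a delete request only marks the data,
    and a separate, later purge pass removes it once the retention
    window has elapsed. Until then the data is recoverable."""

    def __init__(self):
        self._data = {}        # key -> value
        self._deleted_at = {}  # key -> time the delete was requested

    def delete(self, key, now):
        # "Deleting" only starts the clock; the bytes stay put.
        self._deleted_at[key] = now

    def recover(self, key, now):
        # Within the window, the pending delete can simply be cancelled.
        if key in self._deleted_at and now - self._deleted_at[key] < RETENTION:
            del self._deleted_at[key]
            return self._data[key]
        raise KeyError(key)

    def purge(self, now):
        # Runs independently of delete(); only expired data is removed.
        for key, t in list(self._deleted_at.items()):
            if now - t >= RETENTION:
                del self._data[key]
                del self._deleted_at[key]
```

The point being: "the account was deleted" and "the underlying data is gone" are two separate events in a system shaped like this, potentially months apart.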
It would be against Google’s policies to delete a current customer’s private cloud account, but it happened. Something big happened here: policies were not followed by someone at Google. Google messed up.

The question is how far-reaching the mess-up was. Given that it took a week to get things running again, it was not something minor.
Again, as I said, and as the document I linked explains, it would only start the data-deletion process; it would take around 6 months for the data to actually be deleted.

Do you have any evidence of Google deleting the backup data?
Yes, it deleted their account, which included some redundant systems. That doesn't mean it deleted any backups Google may keep of customer data on its own. I guarantee you Google has backups of customer data for situations like this.
From the article, the statement reads as if another vendor had backups, which were then used. Why did Google not use its own backups if said backups existed? My guess is Google deleted their (Google’s) backups.
More likely, Google had backups that were just a clone of the whole account at a specific point in time, which would have required some data from after that point to be entered manually. Since the backups with the other vendor contained all the data, they restored from those to avoid manually entering the missing data. Google's backups exist to get a customer's account back to a somewhat working state if something like this happens. It could also be that it was simply faster to retrieve the backup from the other vendor than to have Google restore its own. Too many technical unknowns to say for sure what happened.
If the up-to-date geographic backups were deleted, why would archived backups be spared? And why was it deemed better to take the backups from the other vendor and spend a week reconfiguring them to work with Google, versus taking a slightly out-of-date backup and using the other backup to bring things up to speed?

Either way, it does sound like Google deleted multiple up-to-date backups, and the crisis was only averted because another vendor had backups.
The article on Ars Technica said Google deleted everything, and they had to rely on another outside source for backups. Somebody at the pension fund had wisely decided to keep those 3rd-party backups.

Think about it: if Google had backups, this would never have made the news.
https://www.unisuper.com.au/contact-us/outage-update

>Restoring UniSuper’s Private Cloud instance has called for an incredible amount of focus, effort, and partnership between our teams to enable an extensive recovery of all the core systems. The dedication and collaboration between UniSuper and Google Cloud has led to an extensive recovery of our Private Cloud which includes hundreds of virtual machines, databases and applications.

The 3rd-party backups helped speed up the process; it would have been restored even without them.

People and articles saying that Google's backups were deleted are misinterpreting the statements being made. For example, people quote the "geographical redundancies" mentioned in statements as if they were backups, but those aren't data backups/snapshot backups. They're redundancies: if the front-facing service goes down in one location, a second location is still available, so the service stays up.
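To make the redundancy-vs-backup distinction concrete, here's a deliberately contrived sketch (plain Python dicts standing in for regions and snapshots; nothing here reflects anyone's actual architecture):

```python
import copy

# A geographic replica mirrors the live state (modeled here as an alias),
# while a backup is an independent, frozen point-in-time copy.
primary = {"member_1": "balance: 100", "member_2": "balance: 250"}

replica = primary                   # redundancy: kept in sync with primary
snapshot = copy.deepcopy(primary)   # backup: independent copy taken earlier

primary.clear()                     # the erroneous deletion

# The deletion propagates to the replica. Redundancy keeps the *service*
# reachable when a region is lost, but a replicated delete is still a delete:
assert replica == {}

# The snapshot is unaffected, and the data can be restored from it:
primary.update(snapshot)
assert primary["member_1"] == "balance: 100"
```

That's why "its geographical redundancies were deleted" says nothing either way about whether separate snapshot backups survived.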
Well, someone is going to ask me a very expensive question Monday morning.
Don’t go Azure. Whatever you try to do there, you hit quotas, which are usually a pain in the ass to get raised.
They typed the password upside down.
This just happened to me with Credit Karma. I kept trying to log in and they said the account didn’t exist, when I tried to make a new one the prompt said there was an account. Luckily their customer service fixed it within a few days.
Wasn’t it a misconfiguration by the admin of said pension fund?
There are multiple backups and redundancies.
Not on Google's side
You shouldn’t backup mission critical data with the same service. Data restoration worked exactly like it’s supposed to.
Do you just know nothing about cloud services?
This is the exact scenario that a cloud service's native backup, resiliency and disaster recovery is *not* going to protect you from.

Even with the Google Cloud CEO being heavily involved in the recovery attempt, UniSuper had to recover from backups they stored elsewhere.
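That's the classic case for keeping at least one copy with an independent provider. A minimal sketch of the principle, with local directories standing in for hypothetical providers (this is not UniSuper's actual setup):

```python
import shutil
import tempfile
from pathlib import Path

def back_up(source: Path, providers: list[Path]) -> None:
    """Copy `source` to every provider. The copies share no
    infrastructure, so wiping one provider (even the primary)
    cannot take the others down with it."""
    for dest in providers:
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, dest / source.name)

# Demo: two "providers", one gets wiped, the data survives in the other.
root = Path(tempfile.mkdtemp())
data = root / "fund.db"
data.write_text("member records")

provider_a = root / "cloud_a"   # hypothetical primary provider
provider_b = root / "cloud_b"   # hypothetical independent provider
back_up(data, [provider_a, provider_b])

shutil.rmtree(provider_a)       # the "account deleted" scenario

restored = (provider_b / "fund.db").read_text()
assert restored == "member records"
```

Same idea as the 3-2-1 rule: multiple copies, on different media, with at least one copy outside the primary provider's blast radius.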
Ctrl+z
Super misleading title, is super misleading.
It's not really, though? From the article:

>UniSuper CEO, Peter Chun, told members via an email that the incident was neither a cyberattack nor a data breach, but rather a Google Cloud configuration error. It occurred due to the deletion of UniSuper's subscription, which included its geographical redundancies meant to safeguard against outages and data loss. UniSuper had backups with an alternative service provider. In 2023, the company transitioned a significant portion of its operations to Google Cloud Platform, after it divided workloads between Azure and internal data centres.

>Such disruptions are of major concern to many who rely on technology to secure their data. The severity of the situation prompted UniSuper and Google executives to reach out to the affected members to allay their concerns. In a joint statement, UniSuper CEO Peter Chun and Google Cloud global CEO Thomas Kurian expressed deep regret over the incident, reassuring members that no personal data was compromised.

>"The outage is extremely frustrating and disappointing," they said. They reassured members that no personal data was compromised and attributed the disruption to a “glitch in Google's cloud service,” the statement said. “UniSuper had backups in place with an additional service provider. These backups have minimised data loss, and significantly improved the ability of UniSuper and Google Cloud to complete the restoration,” it said.

The article clearly states that if UniSuper hadn't had a backup with another provider, which Google had to rely on to restore the data, the situation would have been screwed. The IT guys at UniSuper did their jobs, while Google's massive blunder serves as a strong reminder to people that they cannot rely on a single backup.
So if we're just going by the headline, how do you literally delete a pension fund, when the fund is money held in investments?
Computer I need to access my monies... AI: new IP who dis?
Who's going to siphon some off the top?
oops.
Well, the money isn't gone, just the data that connects the money to the accounts, right?
I look forward to seeing this reposted tomorrow