"what is cyberattack and can I eat it?"
Ultracrepidarian
What does he mean by THE building? Is this a building on the Syncrude or Suncor sites? At the Suncor refinery? The Fort McMurray office building? Edmonton office building? Calgary office building?
They already confirmed on the weekend we would get paid, and we did on payday yesterday.
They already communicated it is likely 2 weeks for criticals, minimum 2 weeks for non-criticals.
What I've been saying is, their layoff list just made itself. Anyone stupid enough to click a malware link, when you literally get yearly mandatory training on this, is obviously not offering any value to the company. Hopefully they hand slips to anyone that clicked on this.
Contractors aren't being paid only because that gets done through SAP. And SAP is fucked for now.
SAP is fucked, "for now"?!!!?
That's cuter than a puppy falling in its own shit.
Touché. I'll give you that one. haha
I was told it was the Calgary Petro-Canada building. My buddy tends to exaggerate things, so I don't really know. All my knowledge about the event is from text messages, one or two a day; haven't had an in-person chat in a while. I know that it will be a while for systems to come back up, and I think their timeline is wishful thinking.
I can tell you that for us it was a blessing in disguise. We were able to restore almost everything from the cloud using Veeam, and what we couldn't restore was never backed up and thus was not important. Think we lost about 30% of our servers, so it was a good cleanup, as many teams have servers that they never use and never tell anyone about...and they just sit dormant for years. We have made many improvements and redundancies since, so in a way it was a net positive for us. Definitely opened the budget at that time.
For smaller orgs, is it standard to have backups in something like Amazon Glacier or similar?
This is what I don't understand. With so many good options to create backups, there should be almost no excuse for most not to be able to restore and be back up pretty quick.
We have a client who got hit with ransomware and they are now looking to do a complete rebuild because of data loss and no good backups.
Backups are critical for any business and there is no excuse not to be doing them.
Very little in traditional corps is offloaded to Amazon Glacier. Typically unless you're already leveraging AWS, you don't do that.
In a traditional corp you'd have something like tape backups with offsite storage at something like Iron Mountain.
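If anyone's curious what the Glacier route actually looks like, it's about this much code with boto3. Bucket and file names here are made up, and keep in mind restores from Glacier are slow, so it's archive-tier only:

```python
# Rough sketch: push a backup archive to S3 under the GLACIER storage
# class using boto3. The bucket, key, and file path are hypothetical --
# adjust for your own environment.
import boto3

s3 = boto3.client("s3")  # credentials come from env vars / IAM role

s3.upload_file(
    Filename="/backups/fileserver-2023-06-25.tar.gz",  # hypothetical path
    Bucket="example-corp-offsite-backups",             # hypothetical bucket
    Key="fileserver/2023-06-25.tar.gz",
    ExtraArgs={"StorageClass": "GLACIER"},             # cold storage tier
)
```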
Everyone saying "just restore from backup": more than likely, like most high-profile ransomware cases, it's actually an older hack that sat dormant. The ransomware got into the backups, and if they restore those, everything gets nuked again the second it's brought online, even air-gapped, because the trigger is tied to something like a timestamp.
What's the point of hacking someone and encrypting all their shit, if they can just restore it from backup?
Veeam has an option to scan before restore; never used it, but I've seen it there. In our case, we teamed up with a security company that was already being used by some of our partner companies and is highly regarded, and had their agent installed on machines off-net before they were allowed to connect to the network. This product was much better than anything we ever had and could analyze what is going on and block things as they are detected. They did a forensic pass and blocked known signatures etc. Seemed to have worked well in our case. We also knew the files that were being executed (via scheduled task) and blew those files away before bringing machines online. Been a few years now, and that process seems to have worked well enough.
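That "blow the known-bad files away before bringing machines online" step is easy enough to script yourself. Rough sketch with placeholder IOC hashes; in a real incident the hash list comes from your security vendor's forensics:

```python
# Rough sketch: sweep a restored machine for files matching known-bad
# SHA-256 hashes before it's allowed back on the network. Hash list
# and scan root are placeholders.
import hashlib
import pathlib

KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder IOC hash
}

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(root: str) -> list[pathlib.Path]:
    hits = []
    for p in pathlib.Path(root).rglob("*"):
        if p.is_file():
            try:
                if sha256(p) in KNOWN_BAD_SHA256:
                    hits.append(p)
            except OSError:
                pass  # locked/unreadable file; flag for manual review
    return hits

if __name__ == "__main__":
    for hit in sweep("/mnt/restored-machine"):  # hypothetical mount point
        print(f"KNOWN-BAD FILE: {hit}")  # quarantine/delete before reconnecting
```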
Many companies don't take backups seriously, so I think that's the hope of these hackers. Even if you restore to last night's backup, for example, you are still kind of hooped for things that happened during that day, especially with financial transactions etc.: back to pen and paper and manual remediation.
Since then we have improved our backup posture big time: SAN snapshots, SAN snapshots for local backup copies, hardened Linux repositories with immutability, instant offload to Azure the moment a backup is made, Azure backup immutability, Azure backups with Veeam, Azure backups with Azure Backup for a few critical things, Zerto replication... Hoping that between all those things we would be able to recover much quicker if it ever happens again. Lots of investment into general cybersecurity tools as well after that event. So far so good... *fingers crossed*
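For anyone wondering what "hardened Linux repository with immutability" means mechanically: the underlying idea is the filesystem immutable bit, so even root can't modify or delete a backup file without clearing the flag locally first. This is just the concept, not Veeam's actual implementation:

```python
# Rough sketch of the idea behind a hardened Linux repo: set the
# filesystem immutable attribute on each finished backup file.
# Requires root and an ext4/XFS-style filesystem.
import subprocess

def make_immutable(path: str) -> None:
    # chattr +i sets the immutable attribute; the file can't be
    # changed or deleted until the flag is cleared
    subprocess.run(["chattr", "+i", path], check=True)

def release(path: str) -> None:
    # once the retention window expires, clear the flag so cleanup can run
    subprocess.run(["chattr", "-i", path], check=True)

make_immutable("/backups/daily/job-2023-06-25.vbk")  # hypothetical file
```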
Most of these hacks target backups: they wipe Windows shadow copies, which are used by versioning if you still run on-prem file servers, and they look for all backup vendors and try to render backups useless etc. So even if you have backups, things can still take some time. In our case they took out the Veeam servers (along with all other servers) and all our backups were also encrypted... but luckily we had everything set to offload to Azure, so we spun up a Veeam server in Azure, connected to our buckets, and started recovering everything. Veeam doesn't have a central catalogue, so you can basically import your backup files and start restoring right away.
I've heard good things about Veeam. I believe some corps are using it post-ransomware as well, so I'm not surprised. But large corps like Suncor tend to move slow, and things aren't upgraded/processes aren't updated until something happens.
It's similar to safety processes. Every safety process is written in blood; every IT process is written in lost costs and downtime. I've previously worked in IT at both large and small O&G corps. That being said, I haven't worked corp IT in O&G in about 4 years, since I made the move out of the industry. But it was impossible to get approval on anything other than renewals until something broke, or one of the C-suite was inconvenienced.
Backups are only good if you test the restore, and straight tape backups take forever to restore, if they're not compromised. DR isn't DR until it's proven.
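A restore test doesn't need to be fancy, either. Something as dumb as "extract to scratch space and checksum against a manifest taken at backup time" catches most of it. Sketch below; the paths and manifest format are made up:

```python
# Rough sketch of a restore drill: extract a backup archive to scratch
# space and compare every file's SHA-256 against a manifest captured
# at backup time (here, a JSON dict of {relative_path: hash}).
import hashlib
import json
import pathlib
import tarfile

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def drill(archive: str, manifest: str, scratch: str) -> bool:
    with tarfile.open(archive) as tar:
        tar.extractall(scratch)  # archive is our own backup, so trusted
    expected = json.loads(pathlib.Path(manifest).read_text())
    ok = True
    for relpath, digest in expected.items():
        restored = pathlib.Path(scratch) / relpath
        if not restored.exists() or sha256(restored) != digest:
            print(f"RESTORE MISMATCH: {relpath}")
            ok = False
    return ok

if drill("/backups/weekly.tar.gz", "/backups/weekly.manifest.json", "/tmp/drill"):
    print("restore verified")  # DR isn't DR until it's proven
```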
This quote is hidden because you are ignoring this member. Show QuoteThis quote is hidden because you are ignoring this member. Show Quote
This was us at around 100 employees. We also kept about 3 months' worth of backups on rotation, disconnected and on site once they came back from Iron Mountain. The key is that backups should be completely offline. Always-online, accessible backups are convenient, but they're a massive failure point.
We backed up at multiple layers: VM/server image backups, as well as data backups. Our SQL servers back then dumped all their backups and diffs hourly, which got offloaded to the backup servers, with tapes/drives pulled weekly. So at a minimum, we could do a fresh rebuild and just restore the data in a worst-case scenario where everything was breached. More work, but it's possible.
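That hourly dump-and-offload loop is the same shape whatever the engine. Toy version below using sqlite's online backup API so it's self-contained and actually runs; a real SQL Server setup would invoke its native BACKUP DATABASE tooling instead:

```python
# Toy sketch of the hourly dump-and-offload loop: take a consistent
# online copy of the database, then copy it to a separate backup host.
# All paths are hypothetical.
import shutil
import sqlite3
import time

def hourly_dump(db_path: str, dump_dir: str, offload_dir: str) -> None:
    while True:
        stamp = time.strftime("%Y%m%d-%H%M")
        dump = f"{dump_dir}/app-{stamp}.db"
        src = sqlite3.connect(db_path)
        dst = sqlite3.connect(dump)
        src.backup(dst)   # consistent copy, even while the app is writing
        dst.close()
        src.close()
        shutil.copy2(dump, offload_dir)  # offload to the backup server mount
        time.sleep(3600)

# hourly_dump("/data/app.db", "/dumps", "/mnt/backupserver/dumps")
```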
I think immutability really helped with this. With it enabled, even as a full admin I can't do anything with the backups until their retention window expires. It's a good deal for cloud providers too: they've basically got you by the balls for the duration of your immutability policy. We have this enabled now for all our backups, both short-term dailies and all monthlies/yearlies etc. Conventional thinking would lead you to believe that you can just blow away the storage account or the resource group and that will delete your immutable backups, but that's not the case; you basically can't do anything with them, or anything they tie into, for the entire duration.
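The same idea exists on the AWS side as S3 Object Lock, for anyone who wants to see what the WORM-retention setup looks like in code rather than in the Azure portal. Names here are made up, and the bucket has to be created with Object Lock enabled:

```python
# Rough sketch of WORM-style immutability, shown with S3 Object Lock
# as the AWS analogue of the Azure policies described above.
import datetime

import boto3

s3 = boto3.client("s3")

with open("/backups/daily/job-2023-06-25.vbk", "rb") as f:  # hypothetical file
    s3.put_object(
        Bucket="example-corp-immutable-backups",  # hypothetical bucket
        Key="daily/job-2023-06-25.vbk",
        Body=f,
        ObjectLockMode="COMPLIANCE",  # even account admins can't delete early
        ObjectLockRetainUntilDate=datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(days=30),  # retention window
    )
```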
Backups don't matter if they've got your creds.
They are usually in your system for weeks before acting. By then, they should know most of your admin passwords, or the accounts of importance used to maintain your storage/backup systems. So cloud or not, it doesn't matter unless you set your backups to be immutable (i.e., pay more).
Most major corps' storage/backup systems have switched to multi-admin verification instead of just MFA, i.e., nothing gets deleted unless two humans submit their creds. Nobody should be using a root/admin/domain admin account to do anything; you elevate access when needed.
Smaller orgs tend to have less manpower and probably still use the standard root/domain admin to do everything. That's how you get pwned.
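Multi-admin verification is conceptually just a two-person rule: nothing destructive runs until two different admins sign off. Toy sketch of the control flow; real products enforce this server-side against audited identities:

```python
# Toy sketch of the two-person rule behind multi-admin verification:
# a destructive action only proceeds once two *distinct* admins have
# approved the request.
def delete_backup(backup_id: str, approvals: set[str], admins: set[str]) -> None:
    valid = approvals & admins  # only count approvals from actual admins
    if len(valid) < 2:
        raise PermissionError(
            f"deleting {backup_id} needs 2 distinct admin approvals, got {len(valid)}"
        )
    print(f"deleting {backup_id} (approved by {sorted(valid)})")

admins = {"alice", "bob", "carol"}                         # hypothetical admins
delete_backup("job-2023-06-25", {"alice", "bob"}, admins)  # proceeds
# delete_backup("job-2023-06-25", {"alice"}, admins)       # raises
```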
At the end of the day, your org basically has to put a number on outage cost in order to spend properly on defense. It will happen sooner or later. We just sent out a phishing test about what people want for Stampede breakfast; 10% of people gave out their creds after receiving the email. People are stupid.
Corps need to drop email as the communication standard, or at least stop sending out links to anything.
Our internal security team does constant phishing tests. They once sent out a real email that looked so awful, everyone ran to Slack to report it, only to find out it was legit.
"An ounce of prevention is worth a pound of cure".
Because of this, my experience during peak WannaCry was as pleasant as could be. I'll paraphrase, but I was first notified that honeypot fileshares were being accessed by an offending user (in the AP dept). I looked up the office extension and the office switchport, saw the port unusually at 100% utilization, and disabled it. Calmly grabbed a spare laptop and walked across the building, politely engaging the user, whose only awareness was their internet radio suddenly cutting out. Soft interrogation, removed the offending PC, reconnected the user with their terminal session and additional instruction. Spent the rest of the morning double-checking access and restore logs.
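Honeypot shares like that are easy to roll yourself, too. Crude stdlib sketch that just polls bait files and screams when anything touches them; a real deployment hooks file-access auditing and would page someone or kill the switchport instead of printing:

```python
# Crude sketch of a honeypot share monitor: poll bait files and alert
# the moment anything is modified -- ransomware chewing through a share
# trips this fast. Bait paths are hypothetical.
import os
import time

BAIT = ["/shares/honeypot/payroll_2023.xlsx"]  # hypothetical bait file

def watch(paths: list[str], interval: float = 2.0) -> None:
    baseline = {p: os.stat(p).st_mtime for p in paths}
    while True:
        for p in paths:
            try:
                mtime = os.stat(p).st_mtime
            except FileNotFoundError:
                mtime = None  # bait deleted/renamed: also suspicious
            if mtime != baseline[p]:
                print(f"ALERT: honeypot file touched: {p}")  # page on-call here
                baseline[p] = mtime
        time.sleep(interval)

# watch(BAIT)
```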
I sympathize with Suncor's IT dept. Letting this get out of control is like the worst version of a total power outage, in a datacenter built on Indian burial grounds, during a solar eclipse, while you're away on scheduled vacation.
I remember doing exactly this in the past. The worst thing was that the tapes always had some stupid write error and rarely could you actually restore what you needed.
Yeah, the error rates during recovery simulations were brutal. We went from tapes to HDDs pretty early on.